Test Report: Docker_Linux_crio 21681

                    
595bbf5b740d7896a57580209f3c1775d52404c7:2025-10-08:41822

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 514.91
38 TestErrorSpam/setup 496.72
47 TestFunctional/serial/StartWithProxy 501.19
49 TestFunctional/serial/SoftStart 366.18
51 TestFunctional/serial/KubectlGetPods 2.06
61 TestFunctional/serial/MinikubeKubectlCmd 2.13
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.11
63 TestFunctional/serial/ExtraConfig 734.35
64 TestFunctional/serial/ComponentHealth 1.97
67 TestFunctional/serial/InvalidService 0.05
70 TestFunctional/parallel/DashboardCmd 1.72
73 TestFunctional/parallel/StatusCmd 3.56
77 TestFunctional/parallel/ServiceCmdConnect 1.56
79 TestFunctional/parallel/PersistentVolumeClaim 241.59
83 TestFunctional/parallel/MySQL 2.1
89 TestFunctional/parallel/NodeLabels 2.41
94 TestFunctional/parallel/ServiceCmd/DeployApp 0.06
95 TestFunctional/parallel/ServiceCmd/List 0.31
96 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
98 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
99 TestFunctional/parallel/ServiceCmd/Format 0.32
101 TestFunctional/parallel/ServiceCmd/URL 0.32
103 TestFunctional/parallel/MountCmd/any-port 2.49
113 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.08
114 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 71.43
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
141 TestMultiControlPlane/serial/StartCluster 502.01
142 TestMultiControlPlane/serial/DeployApp 96.02
143 TestMultiControlPlane/serial/PingHostFromPods 1.37
144 TestMultiControlPlane/serial/AddWorkerNode 1.52
145 TestMultiControlPlane/serial/NodeLabels 1.33
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.61
147 TestMultiControlPlane/serial/CopyFile 1.59
148 TestMultiControlPlane/serial/StopSecondaryNode 1.65
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.58
150 TestMultiControlPlane/serial/RestartSecondaryNode 57.97
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.68
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.69
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.83
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.58
155 TestMultiControlPlane/serial/StopCluster 1.37
156 TestMultiControlPlane/serial/RestartCluster 368.43
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.62
158 TestMultiControlPlane/serial/AddSecondaryNode 1.56
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.62
163 TestJSONOutput/start/Command 495.61
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 501.44
221 TestMultiNode/serial/ValidateNameConflict 7200.063
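
To retry a single entry from this list locally, the minikube integration suite can be filtered down to one test. A minimal sketch, assuming a minikube source checkout and that the integration harness still honors TEST_ARGS with the -test.run and -minikube-start-args flags (these flag names are an assumption and may differ between versions):

	env TEST_ARGS="-minikube-start-args=--container-runtime=crio -test.run TestAddons/Setup" make integration

The CI job below also started minikube with --driver=docker; if Docker is not auto-selected locally, it can be added to the start arguments as well.
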
TestAddons/Setup (514.91s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m34.874831388s)

-- stdout --
	* [addons-541206] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-541206" primary control-plane node in "addons-541206" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1008 14:18:18.091072  100238 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:18:18.091301  100238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:18.091313  100238 out.go:374] Setting ErrFile to fd 2...
	I1008 14:18:18.091319  100238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:18.091546  100238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:18:18.092162  100238 out.go:368] Setting JSON to false
	I1008 14:18:18.093118  100238 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7249,"bootTime":1759925849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:18:18.093219  100238 start.go:141] virtualization: kvm guest
	I1008 14:18:18.094989  100238 out.go:179] * [addons-541206] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:18:18.096427  100238 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:18:18.096502  100238 notify.go:220] Checking for updates...
	I1008 14:18:18.098664  100238 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:18:18.099846  100238 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:18:18.101140  100238 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:18:18.102406  100238 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:18:18.103544  100238 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:18:18.104907  100238 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:18:18.128118  100238 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:18:18.128281  100238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:18.182215  100238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-08 14:18:18.172800964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:18.182325  100238 docker.go:318] overlay module found
	I1008 14:18:18.184320  100238 out.go:179] * Using the docker driver based on user configuration
	I1008 14:18:18.185491  100238 start.go:305] selected driver: docker
	I1008 14:18:18.185508  100238 start.go:925] validating driver "docker" against <nil>
	I1008 14:18:18.185520  100238 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:18:18.186088  100238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:18.241946  100238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-08 14:18:18.232260708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:18.242191  100238 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:18:18.242490  100238 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:18:18.244394  100238 out.go:179] * Using Docker driver with root privileges
	I1008 14:18:18.245671  100238 cni.go:84] Creating CNI manager for ""
	I1008 14:18:18.245736  100238 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:18:18.245751  100238 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 14:18:18.245836  100238 start.go:349] cluster config:
	{Name:addons-541206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-541206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1008 14:18:18.247108  100238 out.go:179] * Starting "addons-541206" primary control-plane node in "addons-541206" cluster
	I1008 14:18:18.248274  100238 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:18:18.249544  100238 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:18:18.250626  100238 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:18:18.250665  100238 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:18:18.250673  100238 cache.go:58] Caching tarball of preloaded images
	I1008 14:18:18.250740  100238 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:18:18.250767  100238 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:18:18.250779  100238 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:18:18.251181  100238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/config.json ...
	I1008 14:18:18.251215  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/config.json: {Name:mka0a945b781126a967edb6141eaf8a77f2cc223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:18.266938  100238 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 14:18:18.267078  100238 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 14:18:18.267096  100238 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1008 14:18:18.267101  100238 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1008 14:18:18.267108  100238 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1008 14:18:18.267112  100238 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1008 14:18:31.668135  100238 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1008 14:18:31.668178  100238 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:18:31.668218  100238 start.go:360] acquireMachinesLock for addons-541206: {Name:mk3dc436fcd514f6d00eaa50068bf7b61d04b403 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:18:31.668342  100238 start.go:364] duration metric: took 98.845µs to acquireMachinesLock for "addons-541206"
	I1008 14:18:31.668373  100238 start.go:93] Provisioning new machine with config: &{Name:addons-541206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-541206 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:18:31.668433  100238 start.go:125] createHost starting for "" (driver="docker")
	I1008 14:18:31.725606  100238 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1008 14:18:31.725916  100238 start.go:159] libmachine.API.Create for "addons-541206" (driver="docker")
	I1008 14:18:31.725956  100238 client.go:168] LocalClient.Create starting
	I1008 14:18:31.726114  100238 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 14:18:32.338251  100238 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 14:18:32.598712  100238 cli_runner.go:164] Run: docker network inspect addons-541206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 14:18:32.615322  100238 cli_runner.go:211] docker network inspect addons-541206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 14:18:32.615410  100238 network_create.go:284] running [docker network inspect addons-541206] to gather additional debugging logs...
	I1008 14:18:32.615432  100238 cli_runner.go:164] Run: docker network inspect addons-541206
	W1008 14:18:32.631051  100238 cli_runner.go:211] docker network inspect addons-541206 returned with exit code 1
	I1008 14:18:32.631087  100238 network_create.go:287] error running [docker network inspect addons-541206]: docker network inspect addons-541206: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-541206 not found
	I1008 14:18:32.631102  100238 network_create.go:289] output of [docker network inspect addons-541206]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-541206 not found
	
	** /stderr **
	I1008 14:18:32.631206  100238 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:18:32.647722  100238 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000511210}
	I1008 14:18:32.647776  100238 network_create.go:124] attempt to create docker network addons-541206 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 14:18:32.647838  100238 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-541206 addons-541206
	I1008 14:18:32.776358  100238 network_create.go:108] docker network addons-541206 192.168.49.0/24 created
	I1008 14:18:32.776393  100238 kic.go:121] calculated static IP "192.168.49.2" for the "addons-541206" container
	I1008 14:18:32.776492  100238 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:18:32.792200  100238 cli_runner.go:164] Run: docker volume create addons-541206 --label name.minikube.sigs.k8s.io=addons-541206 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:18:32.848981  100238 oci.go:103] Successfully created a docker volume addons-541206
	I1008 14:18:32.849146  100238 cli_runner.go:164] Run: docker run --rm --name addons-541206-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-541206 --entrypoint /usr/bin/test -v addons-541206:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:18:36.312021  100238 cli_runner.go:217] Completed: docker run --rm --name addons-541206-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-541206 --entrypoint /usr/bin/test -v addons-541206:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (3.462819624s)
	I1008 14:18:36.312052  100238 oci.go:107] Successfully prepared a docker volume addons-541206
	I1008 14:18:36.312082  100238 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:18:36.312106  100238 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:18:36.312156  100238 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-541206:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:18:40.608495  100238 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-541206:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.296242638s)
	I1008 14:18:40.608526  100238 kic.go:203] duration metric: took 4.296418095s to extract preloaded images to volume ...
	W1008 14:18:40.608616  100238 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:18:40.608646  100238 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:18:40.608686  100238 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:18:40.667993  100238 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-541206 --name addons-541206 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-541206 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-541206 --network addons-541206 --ip 192.168.49.2 --volume addons-541206:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:18:40.941618  100238 cli_runner.go:164] Run: docker container inspect addons-541206 --format={{.State.Running}}
	I1008 14:18:40.959350  100238 cli_runner.go:164] Run: docker container inspect addons-541206 --format={{.State.Status}}
	I1008 14:18:40.976993  100238 cli_runner.go:164] Run: docker exec addons-541206 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:18:41.021381  100238 oci.go:144] the created container "addons-541206" has a running status.
	I1008 14:18:41.021417  100238 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa...
	I1008 14:18:41.203355  100238 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:18:41.233343  100238 cli_runner.go:164] Run: docker container inspect addons-541206 --format={{.State.Status}}
	I1008 14:18:41.258313  100238 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:18:41.258338  100238 kic_runner.go:114] Args: [docker exec --privileged addons-541206 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:18:41.301883  100238 cli_runner.go:164] Run: docker container inspect addons-541206 --format={{.State.Status}}
	I1008 14:18:41.321804  100238 machine.go:93] provisionDockerMachine start ...
	I1008 14:18:41.321915  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:41.338973  100238 main.go:141] libmachine: Using SSH client type: native
	I1008 14:18:41.339329  100238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 14:18:41.339348  100238 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:18:41.486205  100238 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-541206
	
	I1008 14:18:41.486235  100238 ubuntu.go:182] provisioning hostname "addons-541206"
	I1008 14:18:41.486300  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:41.503912  100238 main.go:141] libmachine: Using SSH client type: native
	I1008 14:18:41.504119  100238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 14:18:41.504133  100238 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-541206 && echo "addons-541206" | sudo tee /etc/hostname
	I1008 14:18:41.659226  100238 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-541206
	
	I1008 14:18:41.659311  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:41.676568  100238 main.go:141] libmachine: Using SSH client type: native
	I1008 14:18:41.676778  100238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 14:18:41.676794  100238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-541206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-541206/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-541206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:18:41.822227  100238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:18:41.822254  100238 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:18:41.822296  100238 ubuntu.go:190] setting up certificates
	I1008 14:18:41.822307  100238 provision.go:84] configureAuth start
	I1008 14:18:41.822360  100238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-541206
	I1008 14:18:41.839174  100238 provision.go:143] copyHostCerts
	I1008 14:18:41.839254  100238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:18:41.839391  100238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:18:41.839490  100238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:18:41.839566  100238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.addons-541206 san=[127.0.0.1 192.168.49.2 addons-541206 localhost minikube]
	I1008 14:18:42.444032  100238 provision.go:177] copyRemoteCerts
	I1008 14:18:42.444093  100238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:18:42.444137  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:42.462756  100238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa Username:docker}
	I1008 14:18:42.564826  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:18:42.584160  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 14:18:42.601741  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:18:42.619864  100238 provision.go:87] duration metric: took 797.542784ms to configureAuth
	I1008 14:18:42.619893  100238 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:18:42.620074  100238 config.go:182] Loaded profile config "addons-541206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:18:42.620200  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:42.637895  100238 main.go:141] libmachine: Using SSH client type: native
	I1008 14:18:42.638103  100238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 14:18:42.638119  100238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:18:42.891896  100238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:18:42.891926  100238 machine.go:96] duration metric: took 1.570094531s to provisionDockerMachine
	I1008 14:18:42.891939  100238 client.go:171] duration metric: took 11.165974776s to LocalClient.Create
	I1008 14:18:42.891965  100238 start.go:167] duration metric: took 11.166063256s to libmachine.API.Create "addons-541206"
	I1008 14:18:42.891980  100238 start.go:293] postStartSetup for "addons-541206" (driver="docker")
	I1008 14:18:42.891994  100238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:18:42.892061  100238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:18:42.892119  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:42.908676  100238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa Username:docker}
	I1008 14:18:43.012415  100238 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:18:43.015928  100238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:18:43.015953  100238 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:18:43.015963  100238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:18:43.016016  100238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:18:43.016039  100238 start.go:296] duration metric: took 124.052858ms for postStartSetup
	I1008 14:18:43.016373  100238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-541206
	I1008 14:18:43.032730  100238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/config.json ...
	I1008 14:18:43.032995  100238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:18:43.033038  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:43.049754  100238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa Username:docker}
	I1008 14:18:43.149585  100238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:18:43.153977  100238 start.go:128] duration metric: took 11.485529551s to createHost
	I1008 14:18:43.154001  100238 start.go:83] releasing machines lock for "addons-541206", held for 11.485643821s
	I1008 14:18:43.154075  100238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-541206
	I1008 14:18:43.170955  100238 ssh_runner.go:195] Run: cat /version.json
	I1008 14:18:43.171011  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:43.171036  100238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:18:43.171092  100238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-541206
	I1008 14:18:43.188233  100238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa Username:docker}
	I1008 14:18:43.188702  100238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/addons-541206/id_rsa Username:docker}
	I1008 14:18:43.286541  100238 ssh_runner.go:195] Run: systemctl --version
	I1008 14:18:43.340844  100238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:18:43.376942  100238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:18:43.381600  100238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:18:43.381671  100238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:18:43.407190  100238 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:18:43.407213  100238 start.go:495] detecting cgroup driver to use...
	I1008 14:18:43.407242  100238 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:18:43.407282  100238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:18:43.422754  100238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:18:43.435150  100238 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:18:43.435201  100238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:18:43.451416  100238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:18:43.470162  100238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:18:43.550231  100238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:18:43.632514  100238 docker.go:234] disabling docker service ...
	I1008 14:18:43.632584  100238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:18:43.651033  100238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:18:43.664104  100238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:18:43.741246  100238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:18:43.823424  100238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:18:43.835667  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:18:43.849516  100238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:18:43.849581  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.860070  100238 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:18:43.860146  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.869473  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.878475  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.887243  100238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:18:43.895579  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.904615  100238 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.918943  100238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:18:43.927840  100238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:18:43.935419  100238 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 14:18:43.935501  100238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 14:18:43.947927  100238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:18:43.955593  100238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:18:44.035731  100238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:18:44.139595  100238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:18:44.139703  100238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:18:44.143611  100238 start.go:563] Will wait 60s for crictl version
	I1008 14:18:44.143674  100238 ssh_runner.go:195] Run: which crictl
	I1008 14:18:44.147246  100238 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:18:44.171463  100238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:18:44.171585  100238 ssh_runner.go:195] Run: crio --version
	I1008 14:18:44.200393  100238 ssh_runner.go:195] Run: crio --version
	I1008 14:18:44.230284  100238 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:18:44.231533  100238 cli_runner.go:164] Run: docker network inspect addons-541206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:18:44.248566  100238 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:18:44.252628  100238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:18:44.262720  100238 kubeadm.go:883] updating cluster {Name:addons-541206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-541206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:18:44.262855  100238 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:18:44.262960  100238 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:18:44.294491  100238 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:18:44.294512  100238 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:18:44.294580  100238 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:18:44.320243  100238 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:18:44.320263  100238 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:18:44.320271  100238 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 14:18:44.320368  100238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-541206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-541206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:18:44.320464  100238 ssh_runner.go:195] Run: crio config
	I1008 14:18:44.366784  100238 cni.go:84] Creating CNI manager for ""
	I1008 14:18:44.366814  100238 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:18:44.366838  100238 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:18:44.366864  100238 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-541206 NodeName:addons-541206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:18:44.367011  100238 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-541206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:18:44.367085  100238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:18:44.375633  100238 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:18:44.375708  100238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:18:44.383373  100238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1008 14:18:44.395990  100238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:18:44.411216  100238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1008 14:18:44.423844  100238 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:18:44.427542  100238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:18:44.437779  100238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:18:44.515277  100238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:18:44.538997  100238 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206 for IP: 192.168.49.2
	I1008 14:18:44.539022  100238 certs.go:195] generating shared ca certs ...
	I1008 14:18:44.539039  100238 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:44.539177  100238 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:18:44.933355  100238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt ...
	I1008 14:18:44.933385  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt: {Name:mkbc1e88b858a4bca692e02cfcde72e98d5d81c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:44.933574  100238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key ...
	I1008 14:18:44.933587  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key: {Name:mk68d8f26f9f9beecb1ead11d3cf503b954b2d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:44.933672  100238 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:18:45.017578  100238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt ...
	I1008 14:18:45.017613  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt: {Name:mkdd67c7fd0fb8d608e94225c8c81cec19f95241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.017781  100238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key ...
	I1008 14:18:45.017798  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key: {Name:mkf3c7a6d9d09dfbe43ea93082622e17f3592e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.017902  100238 certs.go:257] generating profile certs ...
	I1008 14:18:45.017963  100238 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.key
	I1008 14:18:45.017976  100238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.crt with IP's: []
	I1008 14:18:45.329089  100238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.crt ...
	I1008 14:18:45.329120  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.crt: {Name:mk4231984f90f7ff7db00b0e418303eae9d57b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.329306  100238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.key ...
	I1008 14:18:45.329317  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/client.key: {Name:mkf51be462358d282ded203ba196c87217e4dd06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.329396  100238 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key.334284f9
	I1008 14:18:45.329416  100238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt.334284f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 14:18:45.470401  100238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt.334284f9 ...
	I1008 14:18:45.470436  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt.334284f9: {Name:mk95221fbb2c3236e3076fed9d17d9d5ad04c7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.470620  100238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key.334284f9 ...
	I1008 14:18:45.470633  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key.334284f9: {Name:mka5263008409d6f4c37dcef7da3d3b9b9ef18f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.470711  100238 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt.334284f9 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt
	I1008 14:18:45.470817  100238 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key.334284f9 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key
	I1008 14:18:45.470886  100238 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.key
	I1008 14:18:45.470904  100238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.crt with IP's: []
	I1008 14:18:45.627396  100238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.crt ...
	I1008 14:18:45.627437  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.crt: {Name:mk37cf9272c13d4b8cb82fbd84d3c73caf8b0893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.627660  100238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.key ...
	I1008 14:18:45.627674  100238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.key: {Name:mk836355884527814a8e28b9050435676c6a9055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:18:45.627882  100238 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:18:45.627923  100238 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:18:45.627949  100238 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:18:45.627972  100238 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:18:45.628607  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:18:45.647312  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:18:45.665506  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:18:45.683012  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:18:45.700309  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 14:18:45.717819  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:18:45.735835  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:18:45.753734  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/addons-541206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 14:18:45.771170  100238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:18:45.790360  100238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:18:45.802497  100238 ssh_runner.go:195] Run: openssl version
	I1008 14:18:45.808523  100238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:18:45.820036  100238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:18:45.824023  100238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:18:45.824082  100238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:18:45.858417  100238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
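	The hashing and symlink steps above follow the standard OpenSSL subject-hash layout for /etc/ssl/certs: the link name is the hash printed by "openssl x509 -hash" plus a ".0" suffix (b5213941.0 for this CA). A minimal sketch of the same two steps:

	# print the subject hash the trust-store link is named after (b5213941 here)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expose the minikube CA to TLS clients on the node under that hash
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0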
	I1008 14:18:45.867573  100238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:18:45.871545  100238 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:18:45.871595  100238 kubeadm.go:400] StartCluster: {Name:addons-541206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-541206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:18:45.871680  100238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:18:45.871730  100238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:18:45.898857  100238 cri.go:89] found id: ""
	I1008 14:18:45.898928  100238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:18:45.907129  100238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:18:45.914963  100238 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:18:45.915015  100238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:18:45.922813  100238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:18:45.922832  100238 kubeadm.go:157] found existing configuration files:
	
	I1008 14:18:45.922874  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 14:18:45.930216  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:18:45.930262  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:18:45.937984  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 14:18:45.945675  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:18:45.945738  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:18:45.953126  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 14:18:45.960899  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:18:45.960956  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:18:45.968198  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 14:18:45.975511  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:18:45.975569  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:18:45.982867  100238 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:18:46.020621  100238 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:18:46.020675  100238 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:18:46.046203  100238 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:18:46.046280  100238 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:18:46.046383  100238 kubeadm.go:318] OS: Linux
	I1008 14:18:46.046477  100238 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:18:46.046552  100238 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:18:46.046637  100238 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:18:46.046712  100238 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:18:46.046792  100238 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:18:46.046873  100238 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:18:46.046963  100238 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:18:46.047041  100238 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:18:46.105183  100238 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:18:46.105333  100238 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:18:46.105481  100238 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:18:46.112139  100238 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:18:46.115567  100238 out.go:252]   - Generating certificates and keys ...
	I1008 14:18:46.115672  100238 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:18:46.115771  100238 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:18:46.191374  100238 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 14:18:46.334069  100238 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 14:18:46.508019  100238 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 14:18:46.679713  100238 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 14:18:46.753484  100238 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 14:18:46.754314  100238 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 14:18:47.338494  100238 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 14:18:47.338649  100238 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 14:18:47.461909  100238 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 14:18:47.664971  100238 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 14:18:48.026430  100238 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 14:18:48.026543  100238 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:18:48.126537  100238 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:18:48.491088  100238 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:18:48.554992  100238 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:18:48.922851  100238 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:18:49.141152  100238 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:18:49.141635  100238 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:18:49.145338  100238 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:18:49.146900  100238 out.go:252]   - Booting up control plane ...
	I1008 14:18:49.147016  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:18:49.147118  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:18:49.148949  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:18:49.162273  100238 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:18:49.162399  100238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:18:49.168938  100238 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:18:49.169134  100238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:18:49.169211  100238 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:18:49.263632  100238 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:18:49.263777  100238 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:18:49.765374  100238 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.89157ms
	I1008 14:18:49.768220  100238 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:18:49.768343  100238 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 14:18:49.768489  100238 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:18:49.768587  100238 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:22:49.769700  100238 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000683283s
	I1008 14:22:49.769963  100238 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000691308s
	I1008 14:22:49.770153  100238 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000982734s
	I1008 14:22:49.770173  100238 kubeadm.go:318] 
	I1008 14:22:49.770373  100238 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:22:49.770614  100238 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:22:49.770815  100238 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:22:49.771065  100238 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:22:49.771226  100238 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:22:49.771419  100238 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:22:49.771434  100238 kubeadm.go:318] 
	I1008 14:22:49.774133  100238 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:22:49.774286  100238 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:22:49.774844  100238 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 14:22:49.774942  100238 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
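	The troubleshooting hint from kubeadm can be followed directly on the node; a minimal sketch using the same CRI-O socket (CONTAINERID is a placeholder for an ID taken from the first command):

	# list kube-* containers known to CRI-O, including ones that already exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# CRI-O's own journal; minikube gathers the same output further down in this log
	sudo journalctl -u crio -n 400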
	W1008 14:22:49.775139  100238 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.89157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000683283s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000691308s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000982734s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-541206 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.89157ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000683283s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000691308s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000982734s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 14:22:49.775227  100238 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:22:50.216259  100238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:22:50.228950  100238 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:22:50.228999  100238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:22:50.236974  100238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:22:50.236992  100238 kubeadm.go:157] found existing configuration files:
	
	I1008 14:22:50.237042  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 14:22:50.244777  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:22:50.244832  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:22:50.252214  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 14:22:50.259794  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:22:50.259856  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:22:50.267156  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 14:22:50.274947  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:22:50.275011  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:22:50.282528  100238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 14:22:50.290260  100238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:22:50.290318  100238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:22:50.298036  100238 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:22:50.334463  100238 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:22:50.334532  100238 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:22:50.354867  100238 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:22:50.354931  100238 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:22:50.354964  100238 kubeadm.go:318] OS: Linux
	I1008 14:22:50.355001  100238 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:22:50.355059  100238 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:22:50.355160  100238 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:22:50.355244  100238 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:22:50.355326  100238 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:22:50.355397  100238 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:22:50.355464  100238 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:22:50.355539  100238 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:22:50.413976  100238 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:22:50.414146  100238 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:22:50.414312  100238 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:22:50.420854  100238 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:22:50.424573  100238 out.go:252]   - Generating certificates and keys ...
	I1008 14:22:50.424684  100238 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:22:50.424801  100238 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:22:50.424906  100238 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:22:50.424987  100238 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:22:50.425090  100238 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:22:50.425192  100238 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:22:50.425279  100238 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:22:50.425360  100238 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:22:50.425489  100238 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:22:50.425591  100238 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:22:50.425647  100238 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:22:50.425721  100238 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:22:50.742395  100238 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:22:50.768508  100238 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:22:50.936103  100238 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:22:51.096039  100238 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:22:51.347608  100238 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:22:51.348079  100238 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:22:51.350534  100238 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:22:51.354304  100238 out.go:252]   - Booting up control plane ...
	I1008 14:22:51.354402  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:22:51.354512  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:22:51.354568  100238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:22:51.367219  100238 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:22:51.367345  100238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:22:51.373975  100238 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:22:51.374252  100238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:22:51.374334  100238 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:22:51.483093  100238 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:22:51.483274  100238 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:22:52.483976  100238 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001021524s
	I1008 14:22:52.488156  100238 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:22:52.488260  100238 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 14:22:52.488367  100238 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:22:52.488489  100238 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:26:52.488994  100238 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	I1008 14:26:52.489139  100238 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	I1008 14:26:52.489257  100238 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	I1008 14:26:52.489272  100238 kubeadm.go:318] 
	I1008 14:26:52.489391  100238 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:26:52.489550  100238 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:26:52.489696  100238 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:26:52.489855  100238 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:26:52.489989  100238 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:26:52.490159  100238 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:26:52.490188  100238 kubeadm.go:318] 
	I1008 14:26:52.492932  100238 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:26:52.493030  100238 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:26:52.493742  100238 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:26:52.493824  100238 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
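	The three endpoints kubeadm was polling can also be probed by hand from inside the node; a minimal sketch (assumes curl is present in the kicbase image; a healthy kube-apiserver answers "ok" on /livez):

	# kube-apiserver liveness on the advertised address
	curl -k https://192.168.49.2:8443/livez
	# kube-controller-manager and kube-scheduler health on localhost
	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez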
	I1008 14:26:52.493912  100238 kubeadm.go:402] duration metric: took 8m6.622318036s to StartCluster
	I1008 14:26:52.493987  100238 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:26:52.494048  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:26:52.520679  100238 cri.go:89] found id: ""
	I1008 14:26:52.520735  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.520745  100238 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:26:52.520752  100238 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:26:52.520813  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:26:52.545790  100238 cri.go:89] found id: ""
	I1008 14:26:52.545814  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.545821  100238 logs.go:284] No container was found matching "etcd"
	I1008 14:26:52.545827  100238 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:26:52.545912  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:26:52.572133  100238 cri.go:89] found id: ""
	I1008 14:26:52.572163  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.572174  100238 logs.go:284] No container was found matching "coredns"
	I1008 14:26:52.572184  100238 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:26:52.572260  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:26:52.598408  100238 cri.go:89] found id: ""
	I1008 14:26:52.598460  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.598475  100238 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:26:52.598484  100238 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:26:52.598547  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:26:52.624814  100238 cri.go:89] found id: ""
	I1008 14:26:52.624841  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.624849  100238 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:26:52.624855  100238 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:26:52.624921  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:26:52.650123  100238 cri.go:89] found id: ""
	I1008 14:26:52.650152  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.650159  100238 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:26:52.650165  100238 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:26:52.650223  100238 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:26:52.678146  100238 cri.go:89] found id: ""
	I1008 14:26:52.678172  100238 logs.go:282] 0 containers: []
	W1008 14:26:52.678180  100238 logs.go:284] No container was found matching "kindnet"
	I1008 14:26:52.678191  100238 logs.go:123] Gathering logs for kubelet ...
	I1008 14:26:52.678202  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:26:52.746326  100238 logs.go:123] Gathering logs for dmesg ...
	I1008 14:26:52.746364  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:26:52.761417  100238 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:26:52.761463  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:26:52.819234  100238 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:26:52.812438    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.812987    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.814489    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.814841    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.816289    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:26:52.812438    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.812987    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.814489    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.814841    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 14:26:52.816289    2387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:26:52.819262  100238 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:26:52.819277  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:26:52.878570  100238 logs.go:123] Gathering logs for container status ...
	I1008 14:26:52.878611  100238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 14:26:52.908726  100238 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001021524s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 14:26:52.908778  100238 out.go:285] * 
	* 
	W1008 14:26:52.908880  100238 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001021524s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001021524s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 14:26:52.908900  100238 out.go:285] * 
	* 
	W1008 14:26:52.910731  100238 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:26:52.914580  100238 out.go:203] 
	W1008 14:26:52.916048  100238 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001021524s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001021524s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000586114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000699898s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00061139s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 14:26:52.916074  100238 out.go:285] * 
	* 
	I1008 14:26:52.917678  100238 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (514.91s)
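
The kubeadm output above shows all three control-plane components timing out on their health checks while the kubelet itself reports healthy, which points at the static-pod containers rather than the kubelet. A minimal diagnostic sequence for this kind of failure, reusing only commands already referenced in the log (the profile name addons-541206 comes from the failing start command; CONTAINERID is a placeholder, exactly as in the kubeadm hint), might look like:

	# List the CRI-O-managed control-plane containers inside the minikube node
	minikube -p addons-541206 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Inspect the logs of whichever component exited (CONTAINERID is a placeholder)
	minikube -p addons-541206 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

	# Collect the full log bundle suggested by the boxed advice above
	minikube -p addons-541206 logs --file=logs.txt

This is an illustrative sketch for reading the report, not part of the test suite.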

                                                
                                    
TestErrorSpam/setup (496.72s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio: exit status 80 (8m16.709863809s)

                                                
                                                
-- stdout --
	* [nospam-526605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-526605" primary control-plane node in "nospam-526605" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.857469ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692693s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000848337s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00099519s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.681319ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.681319ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio" failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 500.857469ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000692693s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000848337s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00099519s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.681319ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.681319ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-526605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21681
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-526605" primary control-plane node in "nospam-526605" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...

error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-526605] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.857469ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000692693s
[control-plane-check] kube-apiserver is not healthy after 4m0.000848337s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00099519s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.681319ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s
[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.681319ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000220463s
[control-plane-check] kube-scheduler is not healthy after 4m0.000486784s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000458435s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
--- FAIL: TestErrorSpam/setup (496.72s)
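
Both kubeadm attempts above fail the same way: the kubelet reports healthy within a second, but none of the three control-plane components ever answer their health endpoints before the 4m0s deadline. The kubeadm output itself names the next triage step, so a minimal sketch of that flow is included here. The profile name, CRI-O socket path, ports, and node IP are taken from the run above; CONTAINERID is a placeholder to copy from the listing, and the curl probes are only an illustrative way to hit the same endpoints kubeadm was polling (assuming curl is available in the node image).

# List all Kubernetes containers on the failed node, including exited ones.
minikube ssh -p nospam-526605 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

# Pull the logs of whichever container exited (replace CONTAINERID with an ID from the listing).
minikube ssh -p nospam-526605 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

# Probe the endpoints kubeadm was waiting on, from inside the node.
minikube ssh -p nospam-526605 -- "curl -sk https://127.0.0.1:10257/healthz"   # kube-controller-manager
minikube ssh -p nospam-526605 -- "curl -sk https://127.0.0.1:10259/livez"     # kube-scheduler
minikube ssh -p nospam-526605 -- "curl -sk https://192.168.49.2:8443/livez"   # kube-apiserver

# Capture the full log bundle the failure message asks for.
minikube logs -p nospam-526605 --file=logs.txt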

TestFunctional/serial/StartWithProxy (501.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m19.929833546s)

-- stdout --
	* [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - HTTP_PROXY=localhost:38261
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38261 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001089035s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000700335s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00088574s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001144449s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001867253s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001867253s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
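The kubeadm hint in the output above already names the crictl commands needed to dig further. As a rough sketch only, assuming crictl and the CRI-O socket path shown in that hint are present inside the kic container (the container name is taken from this report; the name regex is illustrative):

	docker exec functional-367186 sh -c '
	  SOCK=unix:///var/run/crio/crio.sock
	  # List all Kubernetes containers, including exited ones, excluding pause sandboxes.
	  crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
	  # Dump the last lines of each control-plane container log.
	  for id in $(crictl --runtime-endpoint "$SOCK" ps -a -q --name "kube-(apiserver|scheduler|controller-manager)"); do
	    echo "=== $id ==="
	    crictl --runtime-endpoint "$SOCK" logs --tail 40 "$id"
	  done
	'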
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
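The NetworkSettings.Ports block above records which host ports Docker published for the node; the 8441/tcp mapping is the apiserver endpoint the control-plane check was probing. A small sketch for confirming that mapping and probing it from the host, using only the container name from this inspect output (the /livez path mirrors the check in the kubeadm log):

	# Show the host port published for 8441/tcp (32781 at the time of this inspect).
	docker port functional-367186 8441/tcp
	# Or pull it out with the same Go template style minikube itself uses, then probe it.
	HOSTPORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-367186)
	curl -sk --max-time 5 "https://127.0.0.1:${HOSTPORT}/livez" || echo "apiserver not answering on ${HOSTPORT}"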
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 6 (293.874316ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1008 14:43:42.624738  118018 status.go:458] kubeconfig endpoint: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
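Both the warning and the status.go error above point at a kubeconfig that no longer references this profile; the report's own suggestion can be applied directly. A minimal follow-up, assuming the binary and profile name from this run:

	# The run used KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig (see the log above).
	# Repoint kubectl at the functional-367186 profile, then confirm the context and endpoint.
	out/minikube-linux-amd64 -p functional-367186 update-context
	kubectl config current-context
	kubectl cluster-info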
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-211325                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-211325   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ delete  │ -p download-only-840888                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-840888   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p download-docker-250844 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p download-docker-250844                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p binary-mirror-198013 --alsologtostderr --binary-mirror http://127.0.0.1:41765 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p binary-mirror-198013                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ addons  │ enable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ addons  │ disable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ start   │ -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │ 08 Oct 25 14:26 UTC │
	│ start   │ -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-367186      │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:35:22
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:35:22.435394  112988 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:35:22.435670  112988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:35:22.435673  112988 out.go:374] Setting ErrFile to fd 2...
	I1008 14:35:22.435677  112988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:35:22.435888  112988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:35:22.436364  112988 out.go:368] Setting JSON to false
	I1008 14:35:22.437240  112988 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8273,"bootTime":1759925849,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:35:22.437288  112988 start.go:141] virtualization: kvm guest
	I1008 14:35:22.439513  112988 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:35:22.440806  112988 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:35:22.440895  112988 notify.go:220] Checking for updates...
	I1008 14:35:22.444068  112988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:35:22.445399  112988 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:35:22.446556  112988 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:35:22.447675  112988 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:35:22.449104  112988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:35:22.450461  112988 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:35:22.472740  112988 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:35:22.472875  112988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:35:22.530202  112988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:35:22.519923404 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:35:22.530331  112988 docker.go:318] overlay module found
	I1008 14:35:22.532194  112988 out.go:179] * Using the docker driver based on user configuration
	I1008 14:35:22.533622  112988 start.go:305] selected driver: docker
	I1008 14:35:22.533636  112988 start.go:925] validating driver "docker" against <nil>
	I1008 14:35:22.533651  112988 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:35:22.534489  112988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:35:22.587273  112988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:35:22.578195042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:35:22.587475  112988 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:35:22.587667  112988 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:35:22.589672  112988 out.go:179] * Using Docker driver with root privileges
	I1008 14:35:22.590894  112988 cni.go:84] Creating CNI manager for ""
	I1008 14:35:22.590949  112988 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:35:22.590955  112988 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 14:35:22.591016  112988 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:35:22.592220  112988 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:35:22.593221  112988 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:35:22.594677  112988 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:35:22.595801  112988 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:35:22.595832  112988 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:35:22.595841  112988 cache.go:58] Caching tarball of preloaded images
	I1008 14:35:22.595902  112988 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:35:22.595977  112988 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:35:22.595989  112988 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:35:22.596366  112988 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:35:22.596384  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json: {Name:mke956317a0329636687584c436bf15bc7d6cbb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:22.615236  112988 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:35:22.615248  112988 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:35:22.615263  112988 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:35:22.615286  112988 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:35:22.615380  112988 start.go:364] duration metric: took 81.822µs to acquireMachinesLock for "functional-367186"
	I1008 14:35:22.615397  112988 start.go:93] Provisioning new machine with config: &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:35:22.615459  112988 start.go:125] createHost starting for "" (driver="docker")
	I1008 14:35:22.617603  112988 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1008 14:35:22.617860  112988 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:38261 to docker env.
	I1008 14:35:22.617887  112988 start.go:159] libmachine.API.Create for "functional-367186" (driver="docker")
	I1008 14:35:22.617908  112988 client.go:168] LocalClient.Create starting
	I1008 14:35:22.617978  112988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 14:35:22.618009  112988 main.go:141] libmachine: Decoding PEM data...
	I1008 14:35:22.618021  112988 main.go:141] libmachine: Parsing certificate...
	I1008 14:35:22.618078  112988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 14:35:22.618106  112988 main.go:141] libmachine: Decoding PEM data...
	I1008 14:35:22.618113  112988 main.go:141] libmachine: Parsing certificate...
	I1008 14:35:22.618953  112988 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 14:35:22.634923  112988 cli_runner.go:211] docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 14:35:22.634995  112988 network_create.go:284] running [docker network inspect functional-367186] to gather additional debugging logs...
	I1008 14:35:22.635010  112988 cli_runner.go:164] Run: docker network inspect functional-367186
	W1008 14:35:22.651008  112988 cli_runner.go:211] docker network inspect functional-367186 returned with exit code 1
	I1008 14:35:22.651027  112988 network_create.go:287] error running [docker network inspect functional-367186]: docker network inspect functional-367186: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-367186 not found
	I1008 14:35:22.651042  112988 network_create.go:289] output of [docker network inspect functional-367186]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-367186 not found
	
	** /stderr **
	I1008 14:35:22.651184  112988 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:35:22.667014  112988 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c86f50}
	I1008 14:35:22.667039  112988 network_create.go:124] attempt to create docker network functional-367186 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 14:35:22.667088  112988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-367186 functional-367186
	I1008 14:35:22.721848  112988 network_create.go:108] docker network functional-367186 192.168.49.0/24 created
	I1008 14:35:22.721875  112988 kic.go:121] calculated static IP "192.168.49.2" for the "functional-367186" container
	I1008 14:35:22.721939  112988 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 14:35:22.738377  112988 cli_runner.go:164] Run: docker volume create functional-367186 --label name.minikube.sigs.k8s.io=functional-367186 --label created_by.minikube.sigs.k8s.io=true
	I1008 14:35:22.755965  112988 oci.go:103] Successfully created a docker volume functional-367186
	I1008 14:35:22.756030  112988 cli_runner.go:164] Run: docker run --rm --name functional-367186-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-367186 --entrypoint /usr/bin/test -v functional-367186:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 14:35:23.136777  112988 oci.go:107] Successfully prepared a docker volume functional-367186
	I1008 14:35:23.136809  112988 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:35:23.136829  112988 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 14:35:23.136893  112988 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-367186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 14:35:27.460397  112988 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-367186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.323469274s)
	I1008 14:35:27.460419  112988 kic.go:203] duration metric: took 4.323586658s to extract preloaded images to volume ...
	W1008 14:35:27.460536  112988 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 14:35:27.460563  112988 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 14:35:27.460597  112988 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 14:35:27.515588  112988 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-367186 --name functional-367186 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-367186 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-367186 --network functional-367186 --ip 192.168.49.2 --volume functional-367186:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 14:35:27.783254  112988 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Running}}
	I1008 14:35:27.801510  112988 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:35:27.818964  112988 cli_runner.go:164] Run: docker exec functional-367186 stat /var/lib/dpkg/alternatives/iptables
	I1008 14:35:27.866866  112988 oci.go:144] the created container "functional-367186" has a running status.
	I1008 14:35:27.866888  112988 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa...
	I1008 14:35:27.926270  112988 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 14:35:27.958432  112988 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:35:27.975532  112988 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 14:35:27.975543  112988 kic_runner.go:114] Args: [docker exec --privileged functional-367186 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 14:35:28.014134  112988 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:35:28.034275  112988 machine.go:93] provisionDockerMachine start ...
	I1008 14:35:28.034373  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:28.053295  112988 main.go:141] libmachine: Using SSH client type: native
	I1008 14:35:28.053554  112988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:35:28.053563  112988 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:35:28.054239  112988 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55038->127.0.0.1:32778: read: connection reset by peer
	I1008 14:35:31.200596  112988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:35:31.200625  112988 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:35:31.200704  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:31.217815  112988 main.go:141] libmachine: Using SSH client type: native
	I1008 14:35:31.218035  112988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:35:31.218043  112988 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:35:31.374967  112988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:35:31.375029  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:31.393159  112988 main.go:141] libmachine: Using SSH client type: native
	I1008 14:35:31.393366  112988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:35:31.393377  112988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:35:31.542603  112988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:35:31.542624  112988 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:35:31.542642  112988 ubuntu.go:190] setting up certificates
	I1008 14:35:31.542652  112988 provision.go:84] configureAuth start
	I1008 14:35:31.542702  112988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:35:31.560049  112988 provision.go:143] copyHostCerts
	I1008 14:35:31.560101  112988 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:35:31.560111  112988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:35:31.560193  112988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:35:31.560291  112988 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:35:31.560294  112988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:35:31.560319  112988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:35:31.560396  112988 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:35:31.560400  112988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:35:31.560421  112988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:35:31.560521  112988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:35:31.910906  112988 provision.go:177] copyRemoteCerts
	I1008 14:35:31.910957  112988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:35:31.910999  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:31.927737  112988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:35:32.030877  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:35:32.051467  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:35:32.069532  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:35:32.087137  112988 provision.go:87] duration metric: took 544.472172ms to configureAuth
	I1008 14:35:32.087157  112988 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:35:32.087315  112988 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:35:32.087417  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:32.104750  112988 main.go:141] libmachine: Using SSH client type: native
	I1008 14:35:32.104973  112988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:35:32.104985  112988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:35:32.359142  112988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:35:32.359160  112988 machine.go:96] duration metric: took 4.324869428s to provisionDockerMachine
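
provisionDockerMachine ends with the SSH command shown above: it writes /etc/sysconfig/crio.minikube containing an --insecure-registry flag for the 10.96.0.0/12 service CIDR and restarts CRI-O. A quick check that the drop-in landed and the runtime came back up (sketch; assumes the functional-367186 container is still running and reachable from the host):

	docker exec functional-367186 cat /etc/sysconfig/crio.minikube
	docker exec functional-367186 systemctl is-active crio
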
	I1008 14:35:32.359171  112988 client.go:171] duration metric: took 9.74125783s to LocalClient.Create
	I1008 14:35:32.359188  112988 start.go:167] duration metric: took 9.741298703s to libmachine.API.Create "functional-367186"
	I1008 14:35:32.359196  112988 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:35:32.359207  112988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:35:32.359289  112988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:35:32.359329  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:32.376135  112988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:35:32.480672  112988 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:35:32.484303  112988 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:35:32.484319  112988 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:35:32.484329  112988 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:35:32.484381  112988 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:35:32.484480  112988 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:35:32.484561  112988 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:35:32.484618  112988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:35:32.492555  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:35:32.512838  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:35:32.530647  112988 start.go:296] duration metric: took 171.434772ms for postStartSetup
	I1008 14:35:32.530994  112988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:35:32.547889  112988 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:35:32.548141  112988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:35:32.548186  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:32.564863  112988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:35:32.664684  112988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:35:32.669440  112988 start.go:128] duration metric: took 10.053966446s to createHost
	I1008 14:35:32.669469  112988 start.go:83] releasing machines lock for "functional-367186", held for 10.054082427s
	I1008 14:35:32.669555  112988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:35:32.687705  112988 out.go:179] * Found network options:
	I1008 14:35:32.689073  112988 out.go:179]   - HTTP_PROXY=localhost:38261
	W1008 14:35:32.690278  112988 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1008 14:35:32.691806  112988 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
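
The warning above matters for this proxy scenario: HTTP_PROXY is localhost:38261, but NO_PROXY does not cover the node IP, so traffic to 192.168.49.2 may be routed through the proxy. The remedy described in the linked handbook page is to exempt the minikube address range before starting; a sketch of that, with the CIDR chosen to match the 192.168.49.0/24 network created earlier in this log:

	export HTTP_PROXY=localhost:38261
	export NO_PROXY=localhost,127.0.0.1,192.168.49.2,192.168.49.0/24
	out/minikube-linux-amd64 start -p functional-367186 --driver=docker --container-runtime=crio
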
	I1008 14:35:32.693307  112988 ssh_runner.go:195] Run: cat /version.json
	I1008 14:35:32.693347  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:32.693391  112988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:35:32.693435  112988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:35:32.711944  112988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:35:32.712169  112988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:35:32.866683  112988 ssh_runner.go:195] Run: systemctl --version
	I1008 14:35:32.873396  112988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:35:32.909475  112988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:35:32.914526  112988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:35:32.914576  112988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:35:32.940870  112988 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:35:32.940885  112988 start.go:495] detecting cgroup driver to use...
	I1008 14:35:32.940914  112988 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:35:32.940962  112988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:35:32.957923  112988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:35:32.970738  112988 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:35:32.970781  112988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:35:32.987752  112988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:35:33.005893  112988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:35:33.089665  112988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:35:33.178483  112988 docker.go:234] disabling docker service ...
	I1008 14:35:33.178533  112988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:35:33.196957  112988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:35:33.209926  112988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:35:33.293682  112988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:35:33.372247  112988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:35:33.384647  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:35:33.398919  112988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:35:33.398986  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.409495  112988 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:35:33.409579  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.418541  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.427177  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.435852  112988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:35:33.444196  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.452831  112988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.466496  112988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:35:33.475757  112988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:35:33.483353  112988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:35:33.491030  112988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:35:33.568509  112988 ssh_runner.go:195] Run: sudo systemctl restart crio
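
The tee/sed sequence above is the whole CRI-O adaptation: crictl is pointed at /var/run/crio/crio.sock, and in /etc/crio/crio.conf.d/02-crio.conf the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to systemd, conmon_cgroup to pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before CRI-O is restarted. A one-liner to confirm those values took effect (sketch; run against the same node container):

	docker exec functional-367186 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
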
	I1008 14:35:33.671196  112988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:35:33.671245  112988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:35:33.675200  112988 start.go:563] Will wait 60s for crictl version
	I1008 14:35:33.675250  112988 ssh_runner.go:195] Run: which crictl
	I1008 14:35:33.678610  112988 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:35:33.702162  112988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:35:33.702243  112988 ssh_runner.go:195] Run: crio --version
	I1008 14:35:33.730184  112988 ssh_runner.go:195] Run: crio --version
	I1008 14:35:33.759675  112988 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:35:33.760911  112988 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:35:33.777638  112988 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:35:33.781878  112988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:35:33.792166  112988 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:35:33.792286  112988 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:35:33.792331  112988 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:35:33.822364  112988 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:35:33.822379  112988 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:35:33.822431  112988 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:35:33.847813  112988 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:35:33.847825  112988 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:35:33.847832  112988 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:35:33.847942  112988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:35:33.847999  112988 ssh_runner.go:195] Run: crio config
	I1008 14:35:33.892496  112988 cni.go:84] Creating CNI manager for ""
	I1008 14:35:33.892511  112988 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:35:33.892529  112988 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:35:33.892548  112988 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:35:33.892669  112988 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:35:33.892732  112988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:35:33.900710  112988 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:35:33.900758  112988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:35:33.908062  112988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:35:33.920092  112988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:35:33.935149  112988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1008 14:35:33.947464  112988 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:35:33.951318  112988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:35:33.962310  112988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:35:34.042618  112988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:35:34.073058  112988 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:35:34.073072  112988 certs.go:195] generating shared ca certs ...
	I1008 14:35:34.073087  112988 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.073226  112988 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:35:34.073256  112988 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:35:34.073263  112988 certs.go:257] generating profile certs ...
	I1008 14:35:34.073310  112988 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:35:34.073327  112988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt with IP's: []
	I1008 14:35:34.136335  112988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt ...
	I1008 14:35:34.136352  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: {Name:mk357d662f4f83246c2cbde6e2fb0e0111dacc8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.136564  112988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key ...
	I1008 14:35:34.136573  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key: {Name:mkcfdab29fe7c0dd9bfa9b7041c6adffad8fe504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.136661  112988 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:35:34.136672  112988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt.36811b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 14:35:34.231094  112988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt.36811b31 ...
	I1008 14:35:34.231111  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt.36811b31: {Name:mkf93c3bfab97c4eb8bbee001ef1156907314d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.231290  112988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31 ...
	I1008 14:35:34.231297  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31: {Name:mk1404ed6c9eb5e8857edd4b2a019cf8da0c3de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.231371  112988 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt.36811b31 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt
	I1008 14:35:34.231457  112988 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key
	I1008 14:35:34.231513  112988 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:35:34.231523  112988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt with IP's: []
	I1008 14:35:34.451257  112988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt ...
	I1008 14:35:34.451274  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt: {Name:mk70bf88e986a620c09ccdd3d56b55222d2d0406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.451437  112988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key ...
	I1008 14:35:34.451457  112988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key: {Name:mkcdd1374b8a94042f9c872c5078155ff314b451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:35:34.451636  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:35:34.451667  112988 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:35:34.451675  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:35:34.451697  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:35:34.451715  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:35:34.451732  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:35:34.451764  112988 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:35:34.452356  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:35:34.470463  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:35:34.487225  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:35:34.504164  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:35:34.520782  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:35:34.537402  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:35:34.554428  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:35:34.571098  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:35:34.588114  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:35:34.607342  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:35:34.624595  112988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:35:34.641633  112988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:35:34.654068  112988 ssh_runner.go:195] Run: openssl version
	I1008 14:35:34.659981  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:35:34.668456  112988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:35:34.671991  112988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:35:34.672038  112988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:35:34.705394  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:35:34.714305  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:35:34.723102  112988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:35:34.726873  112988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:35:34.726915  112988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:35:34.761542  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:35:34.770550  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:35:34.779371  112988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:35:34.783280  112988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:35:34.783322  112988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:35:34.819567  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
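
The ln -fs commands above use OpenSSL's subject-hash naming, so each certificate is also reachable as /etc/ssl/certs/<hash>.0. The hash portion of the link name comes straight from the certificate itself; a sketch showing how 51391683.0 is derived for 98900.pem, to be run inside the node (for example via docker exec -it functional-367186 bash):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem)
	sudo ln -fs /usr/share/ca-certificates/98900.pem "/etc/ssl/certs/${HASH}.0"
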
	I1008 14:35:34.828779  112988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:35:34.832356  112988 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:35:34.832406  112988 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:35:34.832581  112988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:35:34.832642  112988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:35:34.859328  112988 cri.go:89] found id: ""
	I1008 14:35:34.859389  112988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:35:34.868286  112988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
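
At this point the generated config has been copied to /var/tmp/minikube/kubeadm.yaml and kubeadm is about to run for real. Given how the init below ends, a useful intermediate check is a dry run against the same file, which validates the config and renders the static-pod manifests without bringing anything up (sketch; uses the kubeadm binary minikube already staged under /var/lib/minikube/binaries):

	docker exec functional-367186 sudo \
	  /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run
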
	I1008 14:35:34.876290  112988 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:35:34.876332  112988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:35:34.884482  112988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:35:34.884491  112988 kubeadm.go:157] found existing configuration files:
	
	I1008 14:35:34.884529  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:35:34.892204  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:35:34.892244  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:35:34.899662  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:35:34.907205  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:35:34.907261  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:35:34.914577  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:35:34.922049  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:35:34.922090  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:35:34.929255  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:35:34.936870  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:35:34.936912  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:35:34.944376  112988 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:35:35.002354  112988 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:35:35.059198  112988 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:39:39.009569  112988 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 14:39:39.009778  112988 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:39:39.012046  112988 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:39:39.012108  112988 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:39:39.012213  112988 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:39:39.012355  112988 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:39:39.012477  112988 kubeadm.go:318] OS: Linux
	I1008 14:39:39.012543  112988 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:39:39.012600  112988 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:39:39.012650  112988 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:39:39.012726  112988 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:39:39.012780  112988 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:39:39.012869  112988 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:39:39.012919  112988 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:39:39.012958  112988 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:39:39.013013  112988 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:39:39.013086  112988 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:39:39.013155  112988 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:39:39.013206  112988 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:39:39.015590  112988 out.go:252]   - Generating certificates and keys ...
	I1008 14:39:39.015651  112988 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:39:39.015717  112988 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:39:39.015768  112988 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 14:39:39.015843  112988 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 14:39:39.015900  112988 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 14:39:39.015973  112988 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 14:39:39.016019  112988 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 14:39:39.016175  112988 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 14:39:39.016221  112988 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 14:39:39.016319  112988 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 14:39:39.016368  112988 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 14:39:39.016459  112988 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 14:39:39.016519  112988 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 14:39:39.016566  112988 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:39:39.016613  112988 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:39:39.016661  112988 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:39:39.016701  112988 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:39:39.016757  112988 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:39:39.016798  112988 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:39:39.016860  112988 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:39:39.016922  112988 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:39:39.018375  112988 out.go:252]   - Booting up control plane ...
	I1008 14:39:39.018474  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:39:39.018542  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:39:39.018595  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:39:39.018686  112988 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:39:39.018771  112988 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:39:39.018888  112988 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:39:39.018957  112988 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:39:39.018986  112988 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:39:39.019106  112988 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:39:39.019192  112988 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:39:39.019239  112988 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001089035s
	I1008 14:39:39.019322  112988 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:39:39.019393  112988 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:39:39.019489  112988 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:39:39.019549  112988 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:39:39.019611  112988 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000700335s
	I1008 14:39:39.019670  112988 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00088574s
	I1008 14:39:39.019728  112988 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001144449s
	I1008 14:39:39.019731  112988 kubeadm.go:318] 
	I1008 14:39:39.019806  112988 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:39:39.019867  112988 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:39:39.019945  112988 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:39:39.020050  112988 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:39:39.020114  112988 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:39:39.020178  112988 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:39:39.020215  112988 kubeadm.go:318] 
	W1008 14:39:39.020346  112988 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-367186 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001089035s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000700335s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00088574s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001144449s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 14:39:39.020433  112988 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:39:39.470367  112988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:39:39.483288  112988 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:39:39.483328  112988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:39:39.491561  112988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:39:39.491569  112988 kubeadm.go:157] found existing configuration files:
	
	I1008 14:39:39.491608  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:39:39.499306  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:39:39.499355  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:39:39.506972  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:39:39.514856  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:39:39.514915  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:39:39.522695  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:39:39.530537  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:39:39.530582  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:39:39.538336  112988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:39:39.546325  112988 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:39:39.546371  112988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:39:39.554055  112988 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:39:39.592183  112988 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:39:39.592237  112988 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:39:39.612788  112988 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:39:39.612852  112988 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:39:39.612878  112988 kubeadm.go:318] OS: Linux
	I1008 14:39:39.612935  112988 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:39:39.612995  112988 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:39:39.613058  112988 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:39:39.613097  112988 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:39:39.613138  112988 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:39:39.613210  112988 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:39:39.613276  112988 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:39:39.613314  112988 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:39:39.670679  112988 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:39:39.670824  112988 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:39:39.670966  112988 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:39:39.678139  112988 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:39:39.680855  112988 out.go:252]   - Generating certificates and keys ...
	I1008 14:39:39.680932  112988 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:39:39.681002  112988 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:39:39.681108  112988 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:39:39.681167  112988 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:39:39.681271  112988 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:39:39.681347  112988 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:39:39.681418  112988 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:39:39.681494  112988 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:39:39.681594  112988 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:39:39.681694  112988 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:39:39.681723  112988 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:39:39.681766  112988 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:39:40.115889  112988 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:39:40.344166  112988 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:39:40.382270  112988 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:39:40.699882  112988 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:39:40.743685  112988 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:39:40.744149  112988 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:39:40.746466  112988 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:39:40.749639  112988 out.go:252]   - Booting up control plane ...
	I1008 14:39:40.749744  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:39:40.749853  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:39:40.750353  112988 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:39:40.764158  112988 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:39:40.764248  112988 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:39:40.771289  112988 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:39:40.771565  112988 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:39:40.771633  112988 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:39:40.878087  112988 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:39:40.878205  112988 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:39:41.879836  112988 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001867253s
	I1008 14:39:41.884575  112988 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:39:41.884763  112988 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:39:41.884954  112988 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:39:41.885103  112988 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:43:41.885026  112988 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	I1008 14:43:41.885120  112988 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	I1008 14:43:41.885196  112988 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	I1008 14:43:41.885199  112988 kubeadm.go:318] 
	I1008 14:43:41.885355  112988 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:43:41.885490  112988 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:43:41.885556  112988 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:43:41.885635  112988 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:43:41.885697  112988 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:43:41.885776  112988 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:43:41.885779  112988 kubeadm.go:318] 
	I1008 14:43:41.888978  112988 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:43:41.889078  112988 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:43:41.889573  112988 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:43:41.889655  112988 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:43:41.889719  112988 kubeadm.go:402] duration metric: took 8m7.057316986s to StartCluster
	I1008 14:43:41.889788  112988 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:43:41.889850  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:43:41.916253  112988 cri.go:89] found id: ""
	I1008 14:43:41.916276  112988 logs.go:282] 0 containers: []
	W1008 14:43:41.916285  112988 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:43:41.916292  112988 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:43:41.916359  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:43:41.943480  112988 cri.go:89] found id: ""
	I1008 14:43:41.943501  112988 logs.go:282] 0 containers: []
	W1008 14:43:41.943511  112988 logs.go:284] No container was found matching "etcd"
	I1008 14:43:41.943518  112988 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:43:41.943587  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:43:41.969045  112988 cri.go:89] found id: ""
	I1008 14:43:41.969063  112988 logs.go:282] 0 containers: []
	W1008 14:43:41.969069  112988 logs.go:284] No container was found matching "coredns"
	I1008 14:43:41.969075  112988 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:43:41.969138  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:43:41.995584  112988 cri.go:89] found id: ""
	I1008 14:43:41.995598  112988 logs.go:282] 0 containers: []
	W1008 14:43:41.995605  112988 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:43:41.995609  112988 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:43:41.995654  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:43:42.021265  112988 cri.go:89] found id: ""
	I1008 14:43:42.021280  112988 logs.go:282] 0 containers: []
	W1008 14:43:42.021289  112988 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:43:42.021296  112988 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:43:42.021345  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:43:42.047193  112988 cri.go:89] found id: ""
	I1008 14:43:42.047209  112988 logs.go:282] 0 containers: []
	W1008 14:43:42.047216  112988 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:43:42.047223  112988 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:43:42.047274  112988 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:43:42.074592  112988 cri.go:89] found id: ""
	I1008 14:43:42.074607  112988 logs.go:282] 0 containers: []
	W1008 14:43:42.074614  112988 logs.go:284] No container was found matching "kindnet"
	I1008 14:43:42.074623  112988 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:43:42.074634  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:43:42.138362  112988 logs.go:123] Gathering logs for container status ...
	I1008 14:43:42.138386  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:43:42.168165  112988 logs.go:123] Gathering logs for kubelet ...
	I1008 14:43:42.168182  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:43:42.238090  112988 logs.go:123] Gathering logs for dmesg ...
	I1008 14:43:42.238116  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:43:42.252357  112988 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:43:42.252377  112988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:43:42.309793  112988 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:43:42.302736    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.303327    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.304901    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.305292    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.306900    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:43:42.302736    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.303327    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.304901    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.305292    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:42.306900    2436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1008 14:43:42.309837  112988 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001867253s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 14:43:42.309883  112988 out.go:285] * 
	W1008 14:43:42.309945  112988 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001867253s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 14:43:42.309959  112988 out.go:285] * 
	W1008 14:43:42.311846  112988 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:43:42.315171  112988 out.go:203] 
	W1008 14:43:42.316197  112988 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001867253s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000308404s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00050544s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000356299s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 14:43:42.316217  112988 out.go:285] * 
	I1008 14:43:42.317638  112988 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.469739908Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e084946f-7631-416a-853e-ecc6cd8ac6cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.470232771Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e60f12bb-05a9-4646-9425-cd45f3ac8720 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471342793Z" level=info msg="createCtr: deleting container ID d58ead477c9fac9b9ea6f668a5646ff9d39963395c9bf490e5c051b5469c8342 from idIndex" id=46480348-cb53-44b4-ae48-72a54ac8761e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471388083Z" level=info msg="createCtr: removing container d58ead477c9fac9b9ea6f668a5646ff9d39963395c9bf490e5c051b5469c8342" id=46480348-cb53-44b4-ae48-72a54ac8761e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471417166Z" level=info msg="createCtr: deleting container d58ead477c9fac9b9ea6f668a5646ff9d39963395c9bf490e5c051b5469c8342 from storage" id=46480348-cb53-44b4-ae48-72a54ac8761e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471366028Z" level=info msg="createCtr: deleting container ID c0e5bce357cbb0e405df1fd474c57ec999e5772ff6e4a685630638c9d59dbaae from idIndex" id=e084946f-7631-416a-853e-ecc6cd8ac6cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471536405Z" level=info msg="createCtr: removing container c0e5bce357cbb0e405df1fd474c57ec999e5772ff6e4a685630638c9d59dbaae" id=e084946f-7631-416a-853e-ecc6cd8ac6cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471571741Z" level=info msg="createCtr: deleting container c0e5bce357cbb0e405df1fd474c57ec999e5772ff6e4a685630638c9d59dbaae from storage" id=e084946f-7631-416a-853e-ecc6cd8ac6cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.47179282Z" level=info msg="createCtr: deleting container ID 1702f5dfeecf5ce3cd6288c2180370106e9fb6c0d07a7be5c7028857b377ee00 from idIndex" id=e60f12bb-05a9-4646-9425-cd45f3ac8720 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471817758Z" level=info msg="createCtr: removing container 1702f5dfeecf5ce3cd6288c2180370106e9fb6c0d07a7be5c7028857b377ee00" id=e60f12bb-05a9-4646-9425-cd45f3ac8720 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.471841545Z" level=info msg="createCtr: deleting container 1702f5dfeecf5ce3cd6288c2180370106e9fb6c0d07a7be5c7028857b377ee00 from storage" id=e60f12bb-05a9-4646-9425-cd45f3ac8720 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.475048984Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=46480348-cb53-44b4-ae48-72a54ac8761e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.476544454Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c58427f58fdd58b4fdb4fadaedd99fdb_0" id=e084946f-7631-416a-853e-ecc6cd8ac6cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:33 functional-367186 crio[796]: time="2025-10-08T14:43:33.476927574Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=e60f12bb-05a9-4646-9425-cd45f3ac8720 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.436960026Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f016d314-4ef2-4afb-aa20-34dbf0fc7ba8 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.439006608Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8f3427d0-3adc-49e9-babb-163093a5ac48 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.439874831Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-367186/kube-scheduler" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.440109802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.443104728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.443492546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.459683681Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.46106988Z" level=info msg="createCtr: deleting container ID 84dd901cb9d8a29c3d08485921a108822e177a09bd1e683c54e420286a295b24 from idIndex" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.461110916Z" level=info msg="createCtr: removing container 84dd901cb9d8a29c3d08485921a108822e177a09bd1e683c54e420286a295b24" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.461141644Z" level=info msg="createCtr: deleting container 84dd901cb9d8a29c3d08485921a108822e177a09bd1e683c54e420286a295b24 from storage" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:43:38 functional-367186 crio[796]: time="2025-10-08T14:43:38.463209188Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=1a1c56d1-48a1-4f5a-af25-4ad61e6bf2c4 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:43:43.196315    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:43.196934    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:43.198520    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:43.199567    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:43:43.201196    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 14:43:43 up  2:26,  0 user,  load average: 0.01, 0.08, 0.66
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 14:43:33 functional-367186 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c58427f58fdd58b4fdb4fadaedd99fdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:43:33 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:43:33 functional-367186 kubelet[1801]: E1008 14:43:33.476941    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c58427f58fdd58b4fdb4fadaedd99fdb"
	Oct 08 14:43:33 functional-367186 kubelet[1801]: E1008 14:43:33.477127    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:43:33 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:43:33 functional-367186 kubelet[1801]:  > podSandboxID="4a13bc9351a22b93554dcee46226666905c4e1638ab46a476341d1435096d9d8"
	Oct 08 14:43:33 functional-367186 kubelet[1801]: E1008 14:43:33.477205    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:43:33 functional-367186 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:43:33 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:43:33 functional-367186 kubelet[1801]: E1008 14:43:33.478359    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 14:43:35 functional-367186 kubelet[1801]: E1008 14:43:35.063734    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-367186&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.059726    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: I1008 14:43:38.219546    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.219890    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.436520    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.463522    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:43:38 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:43:38 functional-367186 kubelet[1801]:  > podSandboxID="c0e5f3cd2b90a2545cb343765bc3b9be24372f306973786fac682f615775a4ff"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.463628    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:43:38 functional-367186 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:43:38 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:43:38 functional-367186 kubelet[1801]: E1008 14:43:38.463658    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 14:43:39 functional-367186 kubelet[1801]: E1008 14:43:39.634194    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8afed11699ef  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:39:41.429266927 +0000 UTC m=+0.550355432,LastTimestamp:2025-10-08 14:39:41.429266927 +0000 UTC m=+0.550355432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 14:43:41 functional-367186 kubelet[1801]: E1008 14:43:41.454540    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 14:43:42 functional-367186 kubelet[1801]: E1008 14:43:42.030418    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
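The repeated "cannot open sd-bus: No such file or directory" CreateContainer failures in the kubelet log above typically point at the OCI runtime being unable to reach the systemd D-Bus socket while CRI-O is driving cgroups through systemd (the cgroup_manager = "systemd" setting applied later in this run). A minimal diagnostic sketch, run by hand outside the test harness and assuming the profile/container name functional-367186 from this log:

    # systemd should be running as PID 1 inside the kicbase container
    docker exec functional-367186 systemctl is-system-running
    # sockets the runtime needs for sd-bus; these are the usual systemd/dbus paths, not taken from this log
    docker exec functional-367186 ls -l /run/systemd/private /run/dbus/system_bus_socket
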
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 6 (287.047135ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1008 14:43:43.570397  118344 status.go:458] kubeconfig endpoint: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (501.19s)
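The status check above exits with status 6 because the functional-367186 entry is missing from the kubeconfig, and the stdout block suggests the standard remedy itself: refresh the context. A sketch of that manual recovery, assuming the same binary and profile used by this run (the final check would still fail here, since the apiserver never came up):

    # rewrite the kubeconfig entry for this profile (re-running "minikube start" also updates it)
    out/minikube-linux-amd64 update-context -p functional-367186
    # point kubectl at the minikube-managed context and verify
    kubectl config use-context functional-367186
    kubectl get nodes
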

x
+
TestFunctional/serial/SoftStart (366.18s)

=== RUN   TestFunctional/serial/SoftStart
I1008 14:43:43.585475   98900 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-367186 --alsologtostderr -v=8: exit status 80 (6m3.726467056s)

-- stdout --
	* [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1008 14:43:43.627861  118459 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:43:43.627954  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.627958  118459 out.go:374] Setting ErrFile to fd 2...
	I1008 14:43:43.627962  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.628171  118459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:43:43.628614  118459 out.go:368] Setting JSON to false
	I1008 14:43:43.629495  118459 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8775,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:43:43.629593  118459 start.go:141] virtualization: kvm guest
	I1008 14:43:43.631500  118459 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:43:43.632767  118459 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:43:43.632773  118459 notify.go:220] Checking for updates...
	I1008 14:43:43.634937  118459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:43:43.636218  118459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:43.640666  118459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:43:43.642185  118459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:43:43.643421  118459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:43:43.644930  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:43.645039  118459 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:43:43.667985  118459 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:43:43.668119  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.723136  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.713080092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.723287  118459 docker.go:318] overlay module found
	I1008 14:43:43.725936  118459 out.go:179] * Using the docker driver based on existing profile
	I1008 14:43:43.727069  118459 start.go:305] selected driver: docker
	I1008 14:43:43.727087  118459 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.727171  118459 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:43:43.727263  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.781426  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.772365606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.782086  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:43.782179  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:43.782243  118459 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.784039  118459 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:43:43.785148  118459 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:43:43.786245  118459 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:43:43.787146  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:43.787178  118459 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:43:43.787189  118459 cache.go:58] Caching tarball of preloaded images
	I1008 14:43:43.787237  118459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:43:43.787273  118459 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:43:43.787283  118459 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:43:43.787359  118459 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:43:43.806536  118459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:43:43.806562  118459 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:43:43.806584  118459 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:43:43.806623  118459 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:43:43.806704  118459 start.go:364] duration metric: took 49.444µs to acquireMachinesLock for "functional-367186"
	I1008 14:43:43.806736  118459 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:43:43.806747  118459 fix.go:54] fixHost starting: 
	I1008 14:43:43.806975  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:43.822750  118459 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:43:43.822776  118459 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:43:43.824577  118459 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:43:43.824603  118459 machine.go:93] provisionDockerMachine start ...
	I1008 14:43:43.824673  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:43.841160  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:43.841463  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:43.841483  118459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:43:43.985591  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:43.985624  118459 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:43:43.985682  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.003073  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.003294  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.003316  118459 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:43:44.156671  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:44.156765  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.173583  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.173820  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.173845  118459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:43:44.319171  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:43:44.319200  118459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:43:44.319238  118459 ubuntu.go:190] setting up certificates
	I1008 14:43:44.319253  118459 provision.go:84] configureAuth start
	I1008 14:43:44.319306  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:44.337134  118459 provision.go:143] copyHostCerts
	I1008 14:43:44.337168  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337204  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:43:44.337226  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337295  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:43:44.337373  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337398  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:43:44.337405  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337431  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:43:44.337503  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337524  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:43:44.337531  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337557  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:43:44.337611  118459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:43:44.449681  118459 provision.go:177] copyRemoteCerts
	I1008 14:43:44.449756  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:43:44.449792  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.466984  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:44.569881  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:43:44.569953  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:43:44.587517  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:43:44.587583  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:43:44.605065  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:43:44.605124  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:43:44.622323  118459 provision.go:87] duration metric: took 303.055536ms to configureAuth
	I1008 14:43:44.622354  118459 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:43:44.622537  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:44.622644  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.639387  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.639612  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.639636  118459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:43:44.900547  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:43:44.900571  118459 machine.go:96] duration metric: took 1.07595926s to provisionDockerMachine
	I1008 14:43:44.900586  118459 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:43:44.900600  118459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:43:44.900655  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:43:44.900706  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.917783  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.020925  118459 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:43:45.024356  118459 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1008 14:43:45.024381  118459 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1008 14:43:45.024389  118459 command_runner.go:130] > VERSION_ID="12"
	I1008 14:43:45.024395  118459 command_runner.go:130] > VERSION="12 (bookworm)"
	I1008 14:43:45.024402  118459 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1008 14:43:45.024406  118459 command_runner.go:130] > ID=debian
	I1008 14:43:45.024410  118459 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1008 14:43:45.024415  118459 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1008 14:43:45.024420  118459 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1008 14:43:45.024512  118459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:43:45.024537  118459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:43:45.024550  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:43:45.024614  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:43:45.024709  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:43:45.024722  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 14:43:45.024832  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:43:45.024842  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> /etc/test/nested/copy/98900/hosts
	I1008 14:43:45.024895  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:43:45.032438  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:45.049657  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:43:45.066943  118459 start.go:296] duration metric: took 166.34143ms for postStartSetup
	I1008 14:43:45.067016  118459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:43:45.067050  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.084921  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.184592  118459 command_runner.go:130] > 50%
	I1008 14:43:45.184676  118459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:43:45.188918  118459 command_runner.go:130] > 148G
	I1008 14:43:45.189157  118459 fix.go:56] duration metric: took 1.382403598s for fixHost
	I1008 14:43:45.189184  118459 start.go:83] releasing machines lock for "functional-367186", held for 1.382467794s
	I1008 14:43:45.189256  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:45.206786  118459 ssh_runner.go:195] Run: cat /version.json
	I1008 14:43:45.206834  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.206924  118459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:43:45.207047  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.224940  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.226308  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.323475  118459 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1008 14:43:45.323661  118459 ssh_runner.go:195] Run: systemctl --version
	I1008 14:43:45.374536  118459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 14:43:45.376350  118459 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1008 14:43:45.376387  118459 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1008 14:43:45.376484  118459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:43:45.412862  118459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:43:45.417295  118459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 14:43:45.417656  118459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:43:45.417717  118459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:43:45.425598  118459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:43:45.425618  118459 start.go:495] detecting cgroup driver to use...
	I1008 14:43:45.425645  118459 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:43:45.425686  118459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:43:45.440680  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:43:45.452844  118459 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:43:45.452899  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:43:45.466598  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:43:45.477998  118459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:43:45.564577  118459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:43:45.653273  118459 docker.go:234] disabling docker service ...
	I1008 14:43:45.653343  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:43:45.667540  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:43:45.679916  118459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:43:45.764673  118459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:43:45.852326  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:43:45.864944  118459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:43:45.878738  118459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 14:43:45.878793  118459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:43:45.878844  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.887987  118459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:43:45.888052  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.896857  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.905895  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.914639  118459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:43:45.922953  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.931880  118459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.940059  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.948635  118459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:43:45.955347  118459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 14:43:45.956050  118459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:43:45.963162  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.045488  118459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:43:46.156934  118459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:43:46.156997  118459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:43:46.161038  118459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 14:43:46.161067  118459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 14:43:46.161077  118459 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1008 14:43:46.161086  118459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.161094  118459 command_runner.go:130] > Access: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161118  118459 command_runner.go:130] > Modify: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161129  118459 command_runner.go:130] > Change: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161138  118459 command_runner.go:130] >  Birth: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161173  118459 start.go:563] Will wait 60s for crictl version
	I1008 14:43:46.161212  118459 ssh_runner.go:195] Run: which crictl
	I1008 14:43:46.164650  118459 command_runner.go:130] > /usr/local/bin/crictl
	I1008 14:43:46.164746  118459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:43:46.189255  118459 command_runner.go:130] > Version:  0.1.0
	I1008 14:43:46.189279  118459 command_runner.go:130] > RuntimeName:  cri-o
	I1008 14:43:46.189294  118459 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1008 14:43:46.189299  118459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 14:43:46.189317  118459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:43:46.189365  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.215704  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.215734  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.215741  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.215746  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.215750  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.215755  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.215762  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.215770  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.215806  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.215819  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.215825  118459 command_runner.go:130] >      static
	I1008 14:43:46.215835  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.215846  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.215857  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.215867  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.215877  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.215885  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.215897  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.215909  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.215921  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.217136  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.243203  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.243231  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.243241  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.243249  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.243256  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.243264  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.243272  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.243281  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.243293  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.243299  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.243304  118459 command_runner.go:130] >      static
	I1008 14:43:46.243312  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.243317  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.243327  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.243336  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.243348  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.243358  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.243374  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.243382  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.243390  118459 command_runner.go:130] >    AppArmorEnabled:  false
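The block above shows how this start path reconfigures the preloaded CRI-O before Kubernetes is brought up: it writes /etc/crictl.yaml, pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to "systemd" with conmon_cgroup = "pod", allows unprivileged binding of low ports via default_sysctls, then restarts crio and waits for its socket before checking crictl and crio versions. A quick way to confirm the resulting drop-in by hand, assuming the container name from this log:

    # the keys below are exactly the ones edited by the sed commands above
    docker exec functional-367186 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    docker exec functional-367186 systemctl is-active crio
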
	I1008 14:43:46.246714  118459 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:43:46.248034  118459 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:43:46.264534  118459 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:43:46.268778  118459 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1008 14:43:46.268905  118459 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:43:46.269051  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:46.269113  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.298040  118459 command_runner.go:130] > {
	I1008 14:43:46.298059  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.298064  118459 command_runner.go:130] >     {
	I1008 14:43:46.298072  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.298077  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298082  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.298087  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298091  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298100  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.298109  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.298112  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298117  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.298121  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298138  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298146  118459 command_runner.go:130] >     },
	I1008 14:43:46.298151  118459 command_runner.go:130] >     {
	I1008 14:43:46.298164  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.298170  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298175  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.298181  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298185  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298191  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.298201  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.298207  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298210  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.298217  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298225  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298234  118459 command_runner.go:130] >     },
	I1008 14:43:46.298243  118459 command_runner.go:130] >     {
	I1008 14:43:46.298255  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.298262  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298267  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.298273  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298277  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298283  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.298293  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.298298  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298302  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.298309  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.298315  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298323  118459 command_runner.go:130] >     },
	I1008 14:43:46.298328  118459 command_runner.go:130] >     {
	I1008 14:43:46.298341  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.298350  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298359  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.298362  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298371  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298380  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.298387  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.298393  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298398  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.298408  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298417  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298425  118459 command_runner.go:130] >       },
	I1008 14:43:46.298438  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298461  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298467  118459 command_runner.go:130] >     },
	I1008 14:43:46.298472  118459 command_runner.go:130] >     {
	I1008 14:43:46.298481  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.298490  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298499  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.298507  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298514  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298521  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.298532  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.298540  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298548  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.298557  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298566  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298573  118459 command_runner.go:130] >       },
	I1008 14:43:46.298579  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298588  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298597  118459 command_runner.go:130] >     },
	I1008 14:43:46.298602  118459 command_runner.go:130] >     {
	I1008 14:43:46.298612  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.298619  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298628  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.298636  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298647  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298662  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.298676  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.298684  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298690  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.298699  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298705  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298713  118459 command_runner.go:130] >       },
	I1008 14:43:46.298725  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298735  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298744  118459 command_runner.go:130] >     },
	I1008 14:43:46.298752  118459 command_runner.go:130] >     {
	I1008 14:43:46.298762  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.298784  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298800  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.298808  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298815  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298829  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.298843  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.298851  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298860  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.298864  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298867  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298871  118459 command_runner.go:130] >     },
	I1008 14:43:46.298882  118459 command_runner.go:130] >     {
	I1008 14:43:46.298891  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.298895  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298899  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.298903  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298907  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298914  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.298931  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.298937  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298941  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.298948  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298952  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298957  118459 command_runner.go:130] >       },
	I1008 14:43:46.298961  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298967  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298971  118459 command_runner.go:130] >     },
	I1008 14:43:46.298978  118459 command_runner.go:130] >     {
	I1008 14:43:46.298987  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.298996  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.299004  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.299025  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299035  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.299047  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.299060  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.299068  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299074  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.299081  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.299087  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.299095  118459 command_runner.go:130] >       },
	I1008 14:43:46.299100  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.299108  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.299113  118459 command_runner.go:130] >     }
	I1008 14:43:46.299117  118459 command_runner.go:130] >   ]
	I1008 14:43:46.299125  118459 command_runner.go:130] > }
	I1008 14:43:46.300090  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.300109  118459 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:43:46.300168  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.325949  118459 command_runner.go:130] > {
	I1008 14:43:46.325970  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.325974  118459 command_runner.go:130] >     {
	I1008 14:43:46.325985  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.325990  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.325996  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.325999  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326003  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326016  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.326031  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.326040  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326047  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.326055  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326063  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326068  118459 command_runner.go:130] >     },
	I1008 14:43:46.326072  118459 command_runner.go:130] >     {
	I1008 14:43:46.326083  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.326089  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326094  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.326100  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326104  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326125  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.326136  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.326142  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326147  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.326151  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326158  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326163  118459 command_runner.go:130] >     },
	I1008 14:43:46.326166  118459 command_runner.go:130] >     {
	I1008 14:43:46.326172  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.326178  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326183  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.326188  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326192  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326201  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.326208  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.326213  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326219  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.326223  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.326226  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326229  118459 command_runner.go:130] >     },
	I1008 14:43:46.326232  118459 command_runner.go:130] >     {
	I1008 14:43:46.326238  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.326245  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326249  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.326252  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326256  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326262  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.326269  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.326275  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326279  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.326284  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326287  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326293  118459 command_runner.go:130] >       },
	I1008 14:43:46.326307  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326314  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326317  118459 command_runner.go:130] >     },
	I1008 14:43:46.326320  118459 command_runner.go:130] >     {
	I1008 14:43:46.326326  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.326331  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326335  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.326338  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326342  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326349  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.326358  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.326361  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326366  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.326369  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326373  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326378  118459 command_runner.go:130] >       },
	I1008 14:43:46.326382  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326385  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326392  118459 command_runner.go:130] >     },
	I1008 14:43:46.326395  118459 command_runner.go:130] >     {
	I1008 14:43:46.326401  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.326407  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326412  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.326415  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326419  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326429  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.326436  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.326453  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326460  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.326468  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326472  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326475  118459 command_runner.go:130] >       },
	I1008 14:43:46.326479  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326490  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326496  118459 command_runner.go:130] >     },
	I1008 14:43:46.326499  118459 command_runner.go:130] >     {
	I1008 14:43:46.326505  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.326511  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326515  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.326518  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326522  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326531  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.326538  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.326543  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326548  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.326551  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326555  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326558  118459 command_runner.go:130] >     },
	I1008 14:43:46.326561  118459 command_runner.go:130] >     {
	I1008 14:43:46.326567  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.326571  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326575  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.326578  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326582  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326588  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.326611  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.326617  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326621  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.326625  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326631  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326634  118459 command_runner.go:130] >       },
	I1008 14:43:46.326638  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326643  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326646  118459 command_runner.go:130] >     },
	I1008 14:43:46.326650  118459 command_runner.go:130] >     {
	I1008 14:43:46.326655  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.326666  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326673  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.326676  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326680  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326688  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.326698  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.326705  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326709  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.326714  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326718  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.326722  118459 command_runner.go:130] >       },
	I1008 14:43:46.326726  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326732  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.326735  118459 command_runner.go:130] >     }
	I1008 14:43:46.326738  118459 command_runner.go:130] >   ]
	I1008 14:43:46.326740  118459 command_runner.go:130] > }
	I1008 14:43:46.326842  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.326863  118459 cache_images.go:85] Images are preloaded, skipping loading
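(Aside, not part of the log: the "Images are preloaded, skipping loading" decision above is driven by the `sudo crictl images --output json` payload dumped in the preceding lines. The following Go sketch is purely illustrative and is not minikube's actual code; it only shows how a payload with the fields visible in that output — id, repoTags, repoDigests, size, username, pinned — could be decoded to check that a set of expected tags is already present.)

// Hypothetical sketch: decode a `crictl images --output json` payload and
// report whether every expected repo tag is already present.
package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the fields shown in the log output above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// allPreloaded returns true if every tag in want appears in the payload.
func allPreloaded(raw []byte, want []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range want {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Minimal example payload with one image from the log.
	raw := []byte(`{"images":[{"id":"fc25172553d7","repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"repoDigests":[],"size":"73138073","username":"","pinned":false}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/kube-proxy:v1.34.1"})
	fmt.Println(ok, err) // true <nil>
}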
	I1008 14:43:46.326869  118459 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:43:46.326972  118459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
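(Aside, not part of the log: the kubelet [Unit]/[Service] drop-in logged above embeds the node-specific values from the cluster config dump — Kubernetes version v1.34.1, node name functional-367186, node IP 192.168.49.2. The sketch below is a hypothetical illustration, not minikube's actual template code; it only shows how such a drop-in could be rendered from those three values with text/template.)

// Hypothetical sketch: render a kubelet systemd drop-in like the one logged
// above from the node values it contains.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "functional-367186", "192.168.49.2"})
}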
	I1008 14:43:46.327030  118459 ssh_runner.go:195] Run: crio config
	I1008 14:43:46.368296  118459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 14:43:46.368332  118459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 14:43:46.368340  118459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 14:43:46.368344  118459 command_runner.go:130] > #
	I1008 14:43:46.368350  118459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 14:43:46.368356  118459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 14:43:46.368362  118459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 14:43:46.368376  118459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 14:43:46.368381  118459 command_runner.go:130] > # reload'.
	I1008 14:43:46.368392  118459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 14:43:46.368405  118459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 14:43:46.368418  118459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 14:43:46.368433  118459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 14:43:46.368458  118459 command_runner.go:130] > [crio]
	I1008 14:43:46.368472  118459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 14:43:46.368480  118459 command_runner.go:130] > # containers images, in this directory.
	I1008 14:43:46.368492  118459 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1008 14:43:46.368502  118459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 14:43:46.368514  118459 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1008 14:43:46.368525  118459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 14:43:46.368536  118459 command_runner.go:130] > # imagestore = ""
	I1008 14:43:46.368546  118459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 14:43:46.368559  118459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 14:43:46.368566  118459 command_runner.go:130] > # storage_driver = "overlay"
	I1008 14:43:46.368580  118459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 14:43:46.368587  118459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 14:43:46.368594  118459 command_runner.go:130] > # storage_option = [
	I1008 14:43:46.368599  118459 command_runner.go:130] > # ]
	I1008 14:43:46.368608  118459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 14:43:46.368621  118459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 14:43:46.368631  118459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 14:43:46.368640  118459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 14:43:46.368651  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 14:43:46.368666  118459 command_runner.go:130] > # always happen on a node reboot
	I1008 14:43:46.368678  118459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 14:43:46.368702  118459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 14:43:46.368714  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 14:43:46.368726  118459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 14:43:46.368736  118459 command_runner.go:130] > # version_file_persist = ""
	I1008 14:43:46.368751  118459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 14:43:46.368767  118459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 14:43:46.368775  118459 command_runner.go:130] > # internal_wipe = true
	I1008 14:43:46.368791  118459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 14:43:46.368802  118459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 14:43:46.368820  118459 command_runner.go:130] > # internal_repair = true
	I1008 14:43:46.368834  118459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 14:43:46.368847  118459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 14:43:46.368859  118459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 14:43:46.368869  118459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 14:43:46.368882  118459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 14:43:46.368891  118459 command_runner.go:130] > [crio.api]
	I1008 14:43:46.368900  118459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 14:43:46.368910  118459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 14:43:46.368921  118459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 14:43:46.368931  118459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 14:43:46.368942  118459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 14:43:46.368954  118459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 14:43:46.368963  118459 command_runner.go:130] > # stream_port = "0"
	I1008 14:43:46.368971  118459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 14:43:46.368981  118459 command_runner.go:130] > # stream_enable_tls = false
	I1008 14:43:46.368992  118459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 14:43:46.369002  118459 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 14:43:46.369012  118459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 14:43:46.369025  118459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369033  118459 command_runner.go:130] > # stream_tls_cert = ""
	I1008 14:43:46.369043  118459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 14:43:46.369055  118459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369075  118459 command_runner.go:130] > # stream_tls_key = ""
	I1008 14:43:46.369092  118459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 14:43:46.369106  118459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 14:43:46.369121  118459 command_runner.go:130] > # automatically pick up the changes.
	I1008 14:43:46.369130  118459 command_runner.go:130] > # stream_tls_ca = ""
	I1008 14:43:46.369153  118459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369163  118459 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1008 14:43:46.369176  118459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369186  118459 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1008 14:43:46.369197  118459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 14:43:46.369209  118459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 14:43:46.369219  118459 command_runner.go:130] > [crio.runtime]
	I1008 14:43:46.369229  118459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 14:43:46.369240  118459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 14:43:46.369246  118459 command_runner.go:130] > # "nofile=1024:2048"
	I1008 14:43:46.369260  118459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 14:43:46.369269  118459 command_runner.go:130] > # default_ulimits = [
	I1008 14:43:46.369275  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369288  118459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 14:43:46.369296  118459 command_runner.go:130] > # no_pivot = false
	I1008 14:43:46.369305  118459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 14:43:46.369317  118459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 14:43:46.369327  118459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 14:43:46.369338  118459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 14:43:46.369348  118459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 14:43:46.369359  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369368  118459 command_runner.go:130] > # conmon = ""
	I1008 14:43:46.369375  118459 command_runner.go:130] > # Cgroup setting for conmon
	I1008 14:43:46.369386  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 14:43:46.369393  118459 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 14:43:46.369402  118459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 14:43:46.369410  118459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 14:43:46.369421  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369430  118459 command_runner.go:130] > # conmon_env = [
	I1008 14:43:46.369435  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369456  118459 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 14:43:46.369465  118459 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 14:43:46.369475  118459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 14:43:46.369484  118459 command_runner.go:130] > # default_env = [
	I1008 14:43:46.369489  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369498  118459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 14:43:46.369516  118459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1008 14:43:46.369528  118459 command_runner.go:130] > # selinux = false
	I1008 14:43:46.369539  118459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 14:43:46.369555  118459 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1008 14:43:46.369564  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369570  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.369582  118459 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1008 14:43:46.369602  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369609  118459 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1008 14:43:46.369619  118459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 14:43:46.369631  118459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 14:43:46.369644  118459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 14:43:46.369653  118459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 14:43:46.369661  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369672  118459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 14:43:46.369680  118459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 14:43:46.369690  118459 command_runner.go:130] > # the cgroup blockio controller.
	I1008 14:43:46.369697  118459 command_runner.go:130] > # blockio_config_file = ""
	I1008 14:43:46.369709  118459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 14:43:46.369718  118459 command_runner.go:130] > # blockio parameters.
	I1008 14:43:46.369724  118459 command_runner.go:130] > # blockio_reload = false
	I1008 14:43:46.369735  118459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 14:43:46.369744  118459 command_runner.go:130] > # irqbalance daemon.
	I1008 14:43:46.369857  118459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 14:43:46.369873  118459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 14:43:46.369884  118459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 14:43:46.369898  118459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 14:43:46.369909  118459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 14:43:46.369924  118459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 14:43:46.369934  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369943  118459 command_runner.go:130] > # rdt_config_file = ""
	I1008 14:43:46.369950  118459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 14:43:46.369959  118459 command_runner.go:130] > # cgroup_manager = "systemd"
	I1008 14:43:46.369968  118459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 14:43:46.369979  118459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 14:43:46.369989  118459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 14:43:46.370002  118459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 14:43:46.370011  118459 command_runner.go:130] > # will be added.
	I1008 14:43:46.370027  118459 command_runner.go:130] > # default_capabilities = [
	I1008 14:43:46.370036  118459 command_runner.go:130] > # 	"CHOWN",
	I1008 14:43:46.370044  118459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 14:43:46.370051  118459 command_runner.go:130] > # 	"FSETID",
	I1008 14:43:46.370054  118459 command_runner.go:130] > # 	"FOWNER",
	I1008 14:43:46.370062  118459 command_runner.go:130] > # 	"SETGID",
	I1008 14:43:46.370083  118459 command_runner.go:130] > # 	"SETUID",
	I1008 14:43:46.370093  118459 command_runner.go:130] > # 	"SETPCAP",
	I1008 14:43:46.370099  118459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 14:43:46.370108  118459 command_runner.go:130] > # 	"KILL",
	I1008 14:43:46.370113  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370127  118459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 14:43:46.370140  118459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 14:43:46.370152  118459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 14:43:46.370164  118459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 14:43:46.370173  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370183  118459 command_runner.go:130] > default_sysctls = [
	I1008 14:43:46.370193  118459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 14:43:46.370198  118459 command_runner.go:130] > ]
	I1008 14:43:46.370209  118459 command_runner.go:130] > # List of devices on the host that a
	I1008 14:43:46.370249  118459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 14:43:46.370259  118459 command_runner.go:130] > # allowed_devices = [
	I1008 14:43:46.370266  118459 command_runner.go:130] > # 	"/dev/fuse",
	I1008 14:43:46.370270  118459 command_runner.go:130] > # 	"/dev/net/tun",
	I1008 14:43:46.370277  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370285  118459 command_runner.go:130] > # List of additional devices. specified as
	I1008 14:43:46.370300  118459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 14:43:46.370312  118459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 14:43:46.370324  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370333  118459 command_runner.go:130] > # additional_devices = [
	I1008 14:43:46.370341  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370351  118459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 14:43:46.370360  118459 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 14:43:46.370366  118459 command_runner.go:130] > # 	"/etc/cdi",
	I1008 14:43:46.370370  118459 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 14:43:46.370378  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370387  118459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 14:43:46.370400  118459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 14:43:46.370411  118459 command_runner.go:130] > # Defaults to false.
	I1008 14:43:46.370422  118459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 14:43:46.370434  118459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 14:43:46.370462  118459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 14:43:46.370470  118459 command_runner.go:130] > # hooks_dir = [
	I1008 14:43:46.370481  118459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 14:43:46.370491  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370503  118459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 14:43:46.370515  118459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 14:43:46.370526  118459 command_runner.go:130] > # its default mounts from the following two files:
	I1008 14:43:46.370532  118459 command_runner.go:130] > #
	I1008 14:43:46.370538  118459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 14:43:46.370550  118459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 14:43:46.370562  118459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 14:43:46.370571  118459 command_runner.go:130] > #
	I1008 14:43:46.370580  118459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 14:43:46.370593  118459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 14:43:46.370605  118459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 14:43:46.370615  118459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 14:43:46.370623  118459 command_runner.go:130] > #
	I1008 14:43:46.370629  118459 command_runner.go:130] > # default_mounts_file = ""
	I1008 14:43:46.370637  118459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 14:43:46.370647  118459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 14:43:46.370657  118459 command_runner.go:130] > # pids_limit = -1
	I1008 14:43:46.370667  118459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1008 14:43:46.370679  118459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 14:43:46.370693  118459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 14:43:46.370708  118459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 14:43:46.370717  118459 command_runner.go:130] > # log_size_max = -1
	I1008 14:43:46.370728  118459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 14:43:46.370735  118459 command_runner.go:130] > # log_to_journald = false
	I1008 14:43:46.370743  118459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 14:43:46.370755  118459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 14:43:46.370763  118459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 14:43:46.370774  118459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 14:43:46.370785  118459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 14:43:46.370794  118459 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 14:43:46.370804  118459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 14:43:46.370819  118459 command_runner.go:130] > # read_only = false
	I1008 14:43:46.370828  118459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 14:43:46.370841  118459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 14:43:46.370850  118459 command_runner.go:130] > # live configuration reload.
	I1008 14:43:46.370856  118459 command_runner.go:130] > # log_level = "info"
	I1008 14:43:46.370868  118459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 14:43:46.370884  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.370893  118459 command_runner.go:130] > # log_filter = ""
	I1008 14:43:46.370905  118459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370917  118459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 14:43:46.370923  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370934  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.370943  118459 command_runner.go:130] > # uid_mappings = ""
	I1008 14:43:46.370955  118459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370967  118459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 14:43:46.370979  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370994  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371003  118459 command_runner.go:130] > # gid_mappings = ""
	I1008 14:43:46.371012  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 14:43:46.371023  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371037  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371055  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371064  118459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 14:43:46.371076  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 14:43:46.371087  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371100  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371112  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371122  118459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 14:43:46.371134  118459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 14:43:46.371146  118459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 14:43:46.371158  118459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 14:43:46.371168  118459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 14:43:46.371179  118459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 14:43:46.371188  118459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 14:43:46.371193  118459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 14:43:46.371204  118459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 14:43:46.371214  118459 command_runner.go:130] > # drop_infra_ctr = true
	I1008 14:43:46.371224  118459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 14:43:46.371235  118459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 14:43:46.371249  118459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 14:43:46.371258  118459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 14:43:46.371276  118459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 14:43:46.371285  118459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 14:43:46.371294  118459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 14:43:46.371306  118459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 14:43:46.371316  118459 command_runner.go:130] > # shared_cpuset = ""
	I1008 14:43:46.371326  118459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 14:43:46.371337  118459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 14:43:46.371346  118459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 14:43:46.371358  118459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 14:43:46.371366  118459 command_runner.go:130] > # pinns_path = ""
	I1008 14:43:46.371374  118459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 14:43:46.371385  118459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 14:43:46.371395  118459 command_runner.go:130] > # enable_criu_support = true
	I1008 14:43:46.371405  118459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 14:43:46.371417  118459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 14:43:46.371422  118459 command_runner.go:130] > # enable_pod_events = false
	I1008 14:43:46.371434  118459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 14:43:46.371453  118459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 14:43:46.371465  118459 command_runner.go:130] > # default_runtime = "crun"
	I1008 14:43:46.371473  118459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 14:43:46.371484  118459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1008 14:43:46.371501  118459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 14:43:46.371511  118459 command_runner.go:130] > # creation as a file is not desired either.
	I1008 14:43:46.371526  118459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 14:43:46.371537  118459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 14:43:46.371545  118459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 14:43:46.371552  118459 command_runner.go:130] > # ]
	I1008 14:43:46.371559  118459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 14:43:46.371568  118459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 14:43:46.371574  118459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 14:43:46.371579  118459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 14:43:46.371584  118459 command_runner.go:130] > #
	I1008 14:43:46.371589  118459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 14:43:46.371595  118459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 14:43:46.371599  118459 command_runner.go:130] > # runtime_type = "oci"
	I1008 14:43:46.371606  118459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 14:43:46.371610  118459 command_runner.go:130] > # inherit_default_runtime = false
	I1008 14:43:46.371621  118459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 14:43:46.371628  118459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 14:43:46.371633  118459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 14:43:46.371639  118459 command_runner.go:130] > # monitor_env = []
	I1008 14:43:46.371643  118459 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 14:43:46.371649  118459 command_runner.go:130] > # allowed_annotations = []
	I1008 14:43:46.371654  118459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 14:43:46.371660  118459 command_runner.go:130] > # no_sync_log = false
	I1008 14:43:46.371664  118459 command_runner.go:130] > # default_annotations = {}
	I1008 14:43:46.371672  118459 command_runner.go:130] > # stream_websockets = false
	I1008 14:43:46.371676  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.371698  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.371705  118459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 14:43:46.371711  118459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 14:43:46.371719  118459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 14:43:46.371727  118459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 14:43:46.371731  118459 command_runner.go:130] > #   in $PATH.
	I1008 14:43:46.371736  118459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 14:43:46.371743  118459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 14:43:46.371748  118459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 14:43:46.371753  118459 command_runner.go:130] > #   state.
	I1008 14:43:46.371759  118459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 14:43:46.371767  118459 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1008 14:43:46.371772  118459 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1008 14:43:46.371780  118459 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1008 14:43:46.371785  118459 command_runner.go:130] > #   the values from the default runtime on load time.
	I1008 14:43:46.371793  118459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 14:43:46.371801  118459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 14:43:46.371819  118459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 14:43:46.371827  118459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 14:43:46.371832  118459 command_runner.go:130] > #   The currently recognized values are:
	I1008 14:43:46.371840  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 14:43:46.371846  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 14:43:46.371854  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 14:43:46.371859  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 14:43:46.371869  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 14:43:46.371877  118459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 14:43:46.371885  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 14:43:46.371894  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 14:43:46.371900  118459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 14:43:46.371908  118459 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1008 14:43:46.371917  118459 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1008 14:43:46.371926  118459 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1008 14:43:46.371937  118459 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1008 14:43:46.371943  118459 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1008 14:43:46.371951  118459 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1008 14:43:46.371958  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1008 14:43:46.371966  118459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 14:43:46.371973  118459 command_runner.go:130] > #   deprecated option "conmon".
	I1008 14:43:46.371980  118459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 14:43:46.371987  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 14:43:46.371993  118459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 14:43:46.372000  118459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 14:43:46.372006  118459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 14:43:46.372013  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 14:43:46.372019  118459 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1008 14:43:46.372025  118459 command_runner.go:130] > #   conmon-rs by using:
	I1008 14:43:46.372032  118459 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1008 14:43:46.372041  118459 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1008 14:43:46.372050  118459 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1008 14:43:46.372060  118459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 14:43:46.372067  118459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 14:43:46.372073  118459 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1008 14:43:46.372083  118459 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1008 14:43:46.372090  118459 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1008 14:43:46.372097  118459 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1008 14:43:46.372107  118459 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1008 14:43:46.372116  118459 command_runner.go:130] > #   when a machine crash happens.
	I1008 14:43:46.372125  118459 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1008 14:43:46.372132  118459 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1008 14:43:46.372139  118459 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1008 14:43:46.372145  118459 command_runner.go:130] > #   seccomp profile for the runtime.
	I1008 14:43:46.372151  118459 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1008 14:43:46.372160  118459 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1008 14:43:46.372165  118459 command_runner.go:130] > #
	I1008 14:43:46.372170  118459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 14:43:46.372175  118459 command_runner.go:130] > #
	I1008 14:43:46.372181  118459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 14:43:46.372187  118459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 14:43:46.372192  118459 command_runner.go:130] > #
	I1008 14:43:46.372198  118459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 14:43:46.372205  118459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 14:43:46.372208  118459 command_runner.go:130] > #
	I1008 14:43:46.372214  118459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 14:43:46.372219  118459 command_runner.go:130] > # feature.
	I1008 14:43:46.372222  118459 command_runner.go:130] > #
	I1008 14:43:46.372228  118459 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1008 14:43:46.372235  118459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 14:43:46.372242  118459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 14:43:46.372251  118459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 14:43:46.372259  118459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 14:43:46.372261  118459 command_runner.go:130] > #
	I1008 14:43:46.372267  118459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 14:43:46.372275  118459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 14:43:46.372281  118459 command_runner.go:130] > #
	I1008 14:43:46.372286  118459 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1008 14:43:46.372294  118459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 14:43:46.372297  118459 command_runner.go:130] > #
	I1008 14:43:46.372302  118459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 14:43:46.372310  118459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 14:43:46.372314  118459 command_runner.go:130] > # limitation.
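To make the notifier workflow above concrete, here is a minimal, hypothetical pod built in Go (the pod name, container name, and image are placeholders): it carries the "io.kubernetes.cri-o.seccompNotifierAction" annotation with the value "stop" and sets restartPolicy to Never, as the comments above require. It assumes a runtime handler whose allowed_annotations already lists that annotation.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod that opts into the seccomp notifier; with action "stop", CRI-O
	// terminates the workload ~5s after a blocked syscall is observed.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "notifier-demo", // placeholder name
			Annotations: map[string]string{
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		Spec: corev1.PodSpec{
			// Required, otherwise the kubelet restarts the container immediately.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.k8s.io/pause:3.10.1"},
			},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}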
	I1008 14:43:46.372320  118459 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1008 14:43:46.372325  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1008 14:43:46.372330  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372334  118459 command_runner.go:130] > runtime_root = "/run/crun"
	I1008 14:43:46.372343  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372349  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372353  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372358  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372363  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372367  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372374  118459 command_runner.go:130] > allowed_annotations = [
	I1008 14:43:46.372380  118459 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1008 14:43:46.372384  118459 command_runner.go:130] > ]
	I1008 14:43:46.372391  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372395  118459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 14:43:46.372402  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1008 14:43:46.372406  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372411  118459 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 14:43:46.372415  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372422  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372425  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372432  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372436  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372453  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372461  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372473  118459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 14:43:46.372482  118459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 14:43:46.372491  118459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 14:43:46.372498  118459 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1008 14:43:46.372509  118459 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1008 14:43:46.372520  118459 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1008 14:43:46.372530  118459 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1008 14:43:46.372537  118459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 14:43:46.372545  118459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 14:43:46.372555  118459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 14:43:46.372562  118459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 14:43:46.372569  118459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 14:43:46.372574  118459 command_runner.go:130] > # Example:
	I1008 14:43:46.372578  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 14:43:46.372585  118459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 14:43:46.372591  118459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 14:43:46.372602  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 14:43:46.372608  118459 command_runner.go:130] > # cpuset = "0-1"
	I1008 14:43:46.372612  118459 command_runner.go:130] > # cpushares = "5"
	I1008 14:43:46.372617  118459 command_runner.go:130] > # cpuquota = "1000"
	I1008 14:43:46.372621  118459 command_runner.go:130] > # cpuperiod = "100000"
	I1008 14:43:46.372626  118459 command_runner.go:130] > # cpulimit = "35"
	I1008 14:43:46.372630  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.372634  118459 command_runner.go:130] > # The workload name is workload-type.
	I1008 14:43:46.372643  118459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 14:43:46.372650  118459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 14:43:46.372655  118459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 14:43:46.372665  118459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 14:43:46.372682  118459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
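For the workload example above, a pod opts in by carrying the activation annotation (key only) and can override a resource per container using the $annotation_prefix.$resource/$ctrName form described a few lines up. The sketch below simply prints one hypothetical set of such annotations; the container name "app" and the share value are made up.

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical pod annotations for the [crio.runtime.workloads.workload-type]
	// example above: the activation key (value ignored) plus a per-container
	// cpushares override in the $annotation_prefix.$resource/$ctrName form.
	annotations := map[string]string{
		"io.crio/workload":                    "",    // activation annotation, key only
		"io.crio.workload-type.cpushares/app": "200", // override for container "app"
	}
	keys := make([]string, 0, len(annotations))
	for k := range annotations {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s: %q\n", k, annotations[k])
	}
}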
	I1008 14:43:46.372689  118459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 14:43:46.372695  118459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 14:43:46.372701  118459 command_runner.go:130] > # Default value is set to true
	I1008 14:43:46.372706  118459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 14:43:46.372713  118459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 14:43:46.372717  118459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 14:43:46.372724  118459 command_runner.go:130] > # Default value is set to 'false'
	I1008 14:43:46.372728  118459 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 14:43:46.372735  118459 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1008 14:43:46.372743  118459 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1008 14:43:46.372748  118459 command_runner.go:130] > # timezone = ""
	I1008 14:43:46.372756  118459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 14:43:46.372761  118459 command_runner.go:130] > #
	I1008 14:43:46.372767  118459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 14:43:46.372775  118459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1008 14:43:46.372781  118459 command_runner.go:130] > [crio.image]
	I1008 14:43:46.372786  118459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 14:43:46.372792  118459 command_runner.go:130] > # default_transport = "docker://"
	I1008 14:43:46.372798  118459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 14:43:46.372822  118459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372828  118459 command_runner.go:130] > # global_auth_file = ""
	I1008 14:43:46.372833  118459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 14:43:46.372840  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372844  118459 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.372853  118459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 14:43:46.372861  118459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372871  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372877  118459 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 14:43:46.372883  118459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 14:43:46.372888  118459 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1008 14:43:46.372896  118459 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1008 14:43:46.372902  118459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 14:43:46.372908  118459 command_runner.go:130] > # pause_command = "/pause"
	I1008 14:43:46.372914  118459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 14:43:46.372922  118459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 14:43:46.372927  118459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 14:43:46.372935  118459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 14:43:46.372940  118459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 14:43:46.372948  118459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 14:43:46.372952  118459 command_runner.go:130] > # pinned_images = [
	I1008 14:43:46.372958  118459 command_runner.go:130] > # ]
	I1008 14:43:46.372963  118459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 14:43:46.372972  118459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 14:43:46.372978  118459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 14:43:46.372986  118459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 14:43:46.372991  118459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 14:43:46.372997  118459 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1008 14:43:46.373003  118459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 14:43:46.373012  118459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 14:43:46.373021  118459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 14:43:46.373029  118459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 14:43:46.373034  118459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 14:43:46.373042  118459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
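The namespace-scoped lookup described above resolves to <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json and falls back to the global signature_policy when no per-namespace file exists. A toy sketch of that resolution order (illustrative only, not CRI-O's code; paths mirror the values in this config):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePolicy mimics the lookup order described above: prefer the
// per-namespace policy under policyDir, fall back to the global policy.
func resolvePolicy(policyDir, namespace, globalPolicy string) string {
	if namespace != "" {
		p := filepath.Join(policyDir, namespace+".json")
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return globalPolicy
}

func main() {
	fmt.Println(resolvePolicy("/etc/crio/policies", "kube-system", "/etc/crio/policy.json"))
}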
	I1008 14:43:46.373051  118459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 14:43:46.373058  118459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 14:43:46.373065  118459 command_runner.go:130] > # changing them here.
	I1008 14:43:46.373070  118459 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1008 14:43:46.373076  118459 command_runner.go:130] > # insecure_registries = [
	I1008 14:43:46.373079  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373087  118459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 14:43:46.373095  118459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 14:43:46.373104  118459 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 14:43:46.373112  118459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 14:43:46.373116  118459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 14:43:46.373124  118459 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1008 14:43:46.373130  118459 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1008 14:43:46.373134  118459 command_runner.go:130] > # auto_reload_registries = false
	I1008 14:43:46.373142  118459 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1008 14:43:46.373149  118459 command_runner.go:130] > #   gets canceled. This value is also used to calculate the pull progress interval, which is pull_progress_timeout / 10.
	I1008 14:43:46.373157  118459 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1008 14:43:46.373162  118459 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1008 14:43:46.373168  118459 command_runner.go:130] > # The mode of short name resolution.
	I1008 14:43:46.373174  118459 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1008 14:43:46.373183  118459 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1008 14:43:46.373190  118459 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1008 14:43:46.373195  118459 command_runner.go:130] > # short_name_mode = "enforcing"
	I1008 14:43:46.373204  118459 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1008 14:43:46.373212  118459 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1008 14:43:46.373216  118459 command_runner.go:130] > # oci_artifact_mount_support = true
	I1008 14:43:46.373224  118459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 14:43:46.373228  118459 command_runner.go:130] > # CNI plugins.
	I1008 14:43:46.373234  118459 command_runner.go:130] > [crio.network]
	I1008 14:43:46.373239  118459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 14:43:46.373246  118459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 14:43:46.373251  118459 command_runner.go:130] > # cni_default_network = ""
	I1008 14:43:46.373259  118459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 14:43:46.373266  118459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 14:43:46.373271  118459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 14:43:46.373277  118459 command_runner.go:130] > # plugin_dirs = [
	I1008 14:43:46.373280  118459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 14:43:46.373284  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373289  118459 command_runner.go:130] > # List of included pod metrics.
	I1008 14:43:46.373295  118459 command_runner.go:130] > # included_pod_metrics = [
	I1008 14:43:46.373297  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373304  118459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 14:43:46.373310  118459 command_runner.go:130] > [crio.metrics]
	I1008 14:43:46.373314  118459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 14:43:46.373320  118459 command_runner.go:130] > # enable_metrics = false
	I1008 14:43:46.373324  118459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 14:43:46.373331  118459 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 14:43:46.373337  118459 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1008 14:43:46.373347  118459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 14:43:46.373355  118459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 14:43:46.373359  118459 command_runner.go:130] > # metrics_collectors = [
	I1008 14:43:46.373364  118459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 14:43:46.373368  118459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 14:43:46.373371  118459 command_runner.go:130] > # 	"containers_oom_total",
	I1008 14:43:46.373374  118459 command_runner.go:130] > # 	"processes_defunct",
	I1008 14:43:46.373378  118459 command_runner.go:130] > # 	"operations_total",
	I1008 14:43:46.373381  118459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 14:43:46.373386  118459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 14:43:46.373389  118459 command_runner.go:130] > # 	"operations_errors_total",
	I1008 14:43:46.373393  118459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 14:43:46.373397  118459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 14:43:46.373400  118459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 14:43:46.373408  118459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 14:43:46.373411  118459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 14:43:46.373415  118459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 14:43:46.373422  118459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 14:43:46.373426  118459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 14:43:46.373430  118459 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1008 14:43:46.373436  118459 command_runner.go:130] > # ]
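The prefix equivalence described above ("operations" treated the same as "crio_operations" and "container_runtime_crio_operations") amounts to stripping the optional prefixes before comparison. A small sketch of that idea, purely illustrative and not CRI-O's implementation:

package main

import (
	"fmt"
	"strings"
)

// canonicalCollector strips the optional "container_runtime_" and "crio_"
// prefixes so that "operations", "crio_operations" and
// "container_runtime_crio_operations" all compare equal.
func canonicalCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	name = strings.TrimPrefix(name, "crio_")
	return name
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(n, "->", canonicalCollector(n))
	}
}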
	I1008 14:43:46.373450  118459 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1008 14:43:46.373460  118459 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1008 14:43:46.373468  118459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 14:43:46.373475  118459 command_runner.go:130] > # metrics_port = 9090
	I1008 14:43:46.373480  118459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 14:43:46.373486  118459 command_runner.go:130] > # metrics_socket = ""
	I1008 14:43:46.373490  118459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 14:43:46.373499  118459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 14:43:46.373508  118459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 14:43:46.373514  118459 command_runner.go:130] > # certificate on any modification event.
	I1008 14:43:46.373518  118459 command_runner.go:130] > # metrics_cert = ""
	I1008 14:43:46.373525  118459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 14:43:46.373530  118459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 14:43:46.373536  118459 command_runner.go:130] > # metrics_key = ""
	I1008 14:43:46.373542  118459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 14:43:46.373548  118459 command_runner.go:130] > [crio.tracing]
	I1008 14:43:46.373554  118459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 14:43:46.373564  118459 command_runner.go:130] > # enable_tracing = false
	I1008 14:43:46.373571  118459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 14:43:46.373576  118459 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1008 14:43:46.373584  118459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 14:43:46.373591  118459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 14:43:46.373598  118459 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 14:43:46.373604  118459 command_runner.go:130] > [crio.nri]
	I1008 14:43:46.373608  118459 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 14:43:46.373614  118459 command_runner.go:130] > # enable_nri = true
	I1008 14:43:46.373618  118459 command_runner.go:130] > # NRI socket to listen on.
	I1008 14:43:46.373624  118459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 14:43:46.373628  118459 command_runner.go:130] > # NRI plugin directory to use.
	I1008 14:43:46.373635  118459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 14:43:46.373640  118459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 14:43:46.373647  118459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 14:43:46.373653  118459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 14:43:46.373688  118459 command_runner.go:130] > # nri_disable_connections = false
	I1008 14:43:46.373696  118459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 14:43:46.373701  118459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 14:43:46.373705  118459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 14:43:46.373712  118459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 14:43:46.373717  118459 command_runner.go:130] > # NRI default validator configuration.
	I1008 14:43:46.373725  118459 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1008 14:43:46.373733  118459 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1008 14:43:46.373737  118459 command_runner.go:130] > # can be restricted/rejected:
	I1008 14:43:46.373743  118459 command_runner.go:130] > # - OCI hook injection
	I1008 14:43:46.373748  118459 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1008 14:43:46.373755  118459 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1008 14:43:46.373760  118459 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1008 14:43:46.373766  118459 command_runner.go:130] > # - adjustment of linux namespaces
	I1008 14:43:46.373772  118459 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1008 14:43:46.373780  118459 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1008 14:43:46.373788  118459 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1008 14:43:46.373791  118459 command_runner.go:130] > #
	I1008 14:43:46.373795  118459 command_runner.go:130] > # [crio.nri.default_validator]
	I1008 14:43:46.373802  118459 command_runner.go:130] > # nri_enable_default_validator = false
	I1008 14:43:46.373811  118459 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1008 14:43:46.373819  118459 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1008 14:43:46.373827  118459 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1008 14:43:46.373832  118459 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1008 14:43:46.373839  118459 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1008 14:43:46.373843  118459 command_runner.go:130] > # nri_validator_required_plugins = [
	I1008 14:43:46.373848  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373853  118459 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1008 14:43:46.373861  118459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 14:43:46.373865  118459 command_runner.go:130] > [crio.stats]
	I1008 14:43:46.373873  118459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 14:43:46.373880  118459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 14:43:46.373887  118459 command_runner.go:130] > # stats_collection_period = 0
	I1008 14:43:46.373892  118459 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1008 14:43:46.373900  118459 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1008 14:43:46.373907  118459 command_runner.go:130] > # collection_period = 0
	I1008 14:43:46.373928  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353034685Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1008 14:43:46.373938  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353062648Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1008 14:43:46.373948  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.35308236Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1008 14:43:46.373956  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353100078Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1008 14:43:46.373967  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353161884Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:46.373976  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353351718Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1008 14:43:46.373988  118459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 14:43:46.374064  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:46.374077  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:46.374093  118459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:43:46.374116  118459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:43:46.374237  118459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:43:46.374300  118459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:43:46.382363  118459 command_runner.go:130] > kubeadm
	I1008 14:43:46.382384  118459 command_runner.go:130] > kubectl
	I1008 14:43:46.382389  118459 command_runner.go:130] > kubelet
	I1008 14:43:46.382411  118459 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:43:46.382482  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:43:46.390162  118459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:43:46.403097  118459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:43:46.415613  118459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1008 14:43:46.428192  118459 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:43:46.432007  118459 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1008 14:43:46.432080  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.522533  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:46.535801  118459 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:43:46.535827  118459 certs.go:195] generating shared ca certs ...
	I1008 14:43:46.535849  118459 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:46.536002  118459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:43:46.536048  118459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:43:46.536069  118459 certs.go:257] generating profile certs ...
	I1008 14:43:46.536190  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:43:46.536242  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:43:46.536277  118459 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:43:46.536291  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:43:46.536306  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:43:46.536318  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:43:46.536330  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:43:46.536342  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:43:46.536377  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:43:46.536393  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:43:46.536405  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:43:46.536476  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:43:46.536513  118459 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:43:46.536523  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:43:46.536550  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:43:46.536574  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:43:46.536595  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:43:46.536635  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:46.536660  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.536675  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.536688  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.537241  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:43:46.555642  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:43:46.572819  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:43:46.590661  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:43:46.607931  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:43:46.625383  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:43:46.642336  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:43:46.659419  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:43:46.676486  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:43:46.693083  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:43:46.710326  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:43:46.727941  118459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:43:46.740780  118459 ssh_runner.go:195] Run: openssl version
	I1008 14:43:46.747268  118459 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1008 14:43:46.747351  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:43:46.756220  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760077  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760121  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760189  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.794493  118459 command_runner.go:130] > 3ec20f2e
	I1008 14:43:46.794726  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:43:46.803126  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:43:46.811855  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815648  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815718  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815789  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.849403  118459 command_runner.go:130] > b5213941
	I1008 14:43:46.849676  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:43:46.857958  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:43:46.866212  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869736  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869766  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869798  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.904128  118459 command_runner.go:130] > 51391683
	I1008 14:43:46.904402  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
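The three certificate installs above follow the same pattern: compute the certificate's subject hash with openssl, then symlink /etc/ssl/certs/<hash>.0 at the installed PEM so OpenSSL-style trust lookups can find it. A rough Go equivalent of that pattern (illustrative; minikube drives the same commands over SSH as shown, and the path is just the example from this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors the commands in the log above: hash the PEM with
// `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 to it.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent to the -f in `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}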
	I1008 14:43:46.913326  118459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917356  118459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917385  118459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 14:43:46.917396  118459 command_runner.go:130] > Device: 8,1	Inode: 591874      Links: 1
	I1008 14:43:46.917405  118459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.917413  118459 command_runner.go:130] > Access: 2025-10-08 14:39:39.676864991 +0000
	I1008 14:43:46.917418  118459 command_runner.go:130] > Modify: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917426  118459 command_runner.go:130] > Change: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917431  118459 command_runner.go:130] >  Birth: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917505  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:43:46.951955  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.952157  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:43:46.986574  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.986789  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:43:47.021180  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.021253  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:43:47.054995  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.055238  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:43:47.088666  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.089049  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:43:47.123893  118459 command_runner.go:130] > Certificate will not expire
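Each `-checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (illustrative, not minikube's implementation; the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the PEM-encoded certificate at path
// expires within d, matching the semantics of `openssl x509 -checkend <seconds>`.
func willExpireWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}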
	I1008 14:43:47.124156  118459 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:47.124254  118459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:43:47.124313  118459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:43:47.152244  118459 cri.go:89] found id: ""
	I1008 14:43:47.152307  118459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:43:47.160274  118459 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1008 14:43:47.160294  118459 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1008 14:43:47.160299  118459 command_runner.go:130] > /var/lib/minikube/etcd:
	I1008 14:43:47.160318  118459 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:43:47.160325  118459 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:43:47.160370  118459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:43:47.167663  118459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:43:47.167758  118459 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.167803  118459 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "functional-367186" cluster setting kubeconfig missing "functional-367186" context setting]
	I1008 14:43:47.168217  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.169051  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.169269  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
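The client config dumped above reduces to the host https://192.168.49.2:8441 plus the profile's client certificate, key, and cluster CA. A minimal client-go sketch that builds an equivalent client from those same paths (error handling trimmed, usage purely illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := "/home/jenkins/minikube-integration/21681-94984"
	// TLS material taken from the rest.Config fields shown in the log above.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/.minikube/profiles/functional-367186/client.crt",
			KeyFile:  home + "/.minikube/profiles/functional-367186/client.key",
			CAFile:   home + "/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}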
	I1008 14:43:47.170001  118459 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:43:47.170034  118459 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:43:47.170046  118459 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:43:47.170052  118459 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:43:47.170058  118459 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:43:47.170055  118459 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:43:47.170535  118459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:43:47.177804  118459 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 14:43:47.177829  118459 kubeadm.go:601] duration metric: took 17.498385ms to restartPrimaryControlPlane
	I1008 14:43:47.177836  118459 kubeadm.go:402] duration metric: took 53.689897ms to StartCluster
	I1008 14:43:47.177851  118459 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.177960  118459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.178692  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.178964  118459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:43:47.179000  118459 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:43:47.179182  118459 addons.go:69] Setting storage-provisioner=true in profile "functional-367186"
	I1008 14:43:47.179161  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:47.179199  118459 addons.go:238] Setting addon storage-provisioner=true in "functional-367186"
	I1008 14:43:47.179280  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.179202  118459 addons.go:69] Setting default-storageclass=true in profile "functional-367186"
	I1008 14:43:47.179355  118459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-367186"
	I1008 14:43:47.179643  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.179723  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.181696  118459 out.go:179] * Verifying Kubernetes components...
	I1008 14:43:47.182986  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:47.197887  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.198131  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.198516  118459 addons.go:238] Setting addon default-storageclass=true in "functional-367186"
	I1008 14:43:47.198560  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.198956  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.199610  118459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:43:47.201208  118459 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.201228  118459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:43:47.201280  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.224257  118459 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.224285  118459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:43:47.224346  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.226258  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.244099  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.285014  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:47.298345  118459 node_ready.go:35] waiting up to 6m0s for node "functional-367186" to be "Ready" ...
	I1008 14:43:47.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.298934  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:47.336898  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.352323  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.393808  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.393854  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.393886  118459 retry.go:31] will retry after 231.755958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407397  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.407475  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407496  118459 retry.go:31] will retry after 329.539024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.626786  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.679746  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.679800  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.679850  118459 retry.go:31] will retry after 393.16896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.738034  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.790656  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.792936  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.792970  118459 retry.go:31] will retry after 318.025551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.799129  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.799197  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.073934  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.111484  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.127850  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.127921  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.127943  118459 retry.go:31] will retry after 836.309595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.162277  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.164855  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.164886  118459 retry.go:31] will retry after 780.910281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.299211  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.299650  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.799557  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.799964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.946262  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.964996  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.998239  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.000519  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.000554  118459 retry.go:31] will retry after 896.283262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.018974  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.019036  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.019061  118459 retry.go:31] will retry after 1.078166751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.299460  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.299536  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.299868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:49.299950  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:49.799616  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.799720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.800392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:49.897595  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:49.950387  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.950427  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.950463  118459 retry.go:31] will retry after 1.484279714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.097767  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:50.149377  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:50.149421  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.149465  118459 retry.go:31] will retry after 1.600335715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.298625  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:50.798695  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.798808  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.799174  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.298904  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.435639  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:51.489347  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.491876  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.491909  118459 retry.go:31] will retry after 2.200481753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.750291  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:51.799001  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.799398  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:51.799489  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:51.803486  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.803590  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.803616  118459 retry.go:31] will retry after 2.262800355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:52.299098  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.299177  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.299542  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:52.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.799399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.799764  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.298621  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.299048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.692777  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:53.745144  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:53.745204  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.745229  118459 retry.go:31] will retry after 3.527117876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.799392  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.799480  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.799857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:53.799918  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:54.067271  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:54.118417  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:54.118478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.118503  118459 retry.go:31] will retry after 3.862999365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.298755  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.298838  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.299219  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:54.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.799074  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.298863  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.298942  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.299253  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.798989  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.799066  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.799421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:56.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:56.299793  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:56.799548  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.799947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.272978  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:57.298541  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.298620  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.298918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.321958  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:57.324558  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.324587  118459 retry.go:31] will retry after 4.383767223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.799184  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.799301  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.799689  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.982062  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:58.032702  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:58.035195  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.035237  118459 retry.go:31] will retry after 5.903970239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:58.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:58.799473  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:59.298999  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.299078  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.299479  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:59.799062  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.799145  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.299550  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.799200  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.799275  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.799625  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:00.799685  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:01.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.299385  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.299774  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:01.709356  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:01.759088  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:01.761882  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.761921  118459 retry.go:31] will retry after 6.257319935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.799124  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.799237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.299268  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.299716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.799390  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.799502  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.799880  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:02.799960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:03.299492  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.299563  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.299925  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.798665  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.798754  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.940379  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:03.990275  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:03.993084  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:03.993122  118459 retry.go:31] will retry after 4.028920288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:04.298653  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.299341  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:04.798956  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.799033  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:05.299051  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.299176  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.299598  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:05.299657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:05.799285  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.799356  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.799725  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.299393  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.299841  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.799593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.799944  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.299053  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.798714  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.798786  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.799261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:07.799325  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:08.019559  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:08.023109  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:08.072023  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.074947  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074963  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074982  118459 retry.go:31] will retry after 6.922745297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.076401  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.076428  118459 retry.go:31] will retry after 5.441570095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.298802  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.299153  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:08.799104  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.799539  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.299229  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.299310  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.299686  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.799379  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.799472  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.799807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:09.799869  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:10.299531  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.299603  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.299958  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:10.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.799011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.298647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.299123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.798895  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.799225  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:12.298842  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.298915  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:12.299310  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:12.798893  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.299008  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.518328  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:13.572977  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:13.573020  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.573038  118459 retry.go:31] will retry after 15.052611026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.798632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.798973  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.298894  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.299223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.798866  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.798962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:14.799351  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:14.998673  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:15.051035  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:15.051092  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.051116  118459 retry.go:31] will retry after 7.550335313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.299491  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.299568  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:15.799546  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.799646  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.800035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.298586  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.299006  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:17.298969  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.299043  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:17.299467  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:17.798964  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.299415  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.799349  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.799698  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:19.299431  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.299558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.299972  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:19.300047  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:19.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.299042  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.798691  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.798998  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.298572  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.298698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.299121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:21.799149  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:22.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:22.602557  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:22.653552  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:22.656108  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.656138  118459 retry.go:31] will retry after 31.201355729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.799459  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.799558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.799901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.299026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.798988  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.799061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:23.799539  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:24.299048  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.299131  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.299558  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:24.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.799285  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.799622  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.299437  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.299594  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.299994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.799056  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:26.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.298737  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.299066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:26.299138  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:26.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.799032  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.298934  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.299032  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.798977  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:28.298998  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.299130  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.299524  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:28.299599  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:28.625918  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:28.675593  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:28.678080  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.678122  118459 retry.go:31] will retry after 23.952219527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.799477  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.799570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.799970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.298589  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.298685  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.798713  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.798787  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.799221  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.298792  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.299229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.798891  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.799335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:30.799398  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:31.298936  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.299373  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:31.798930  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.799039  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.299072  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.799097  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.799529  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:32.799596  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:33.299230  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.299325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.299740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:33.798515  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.798587  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.798936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.299656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.798590  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.798664  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.799020  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:35.298588  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.298666  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.299052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:35.299143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:35.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.299007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.798626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:37.298948  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.299051  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:37.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:37.799006  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.799086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.799417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.299020  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.299100  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.299469  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.799369  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.799927  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:39.299580  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.299693  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.300082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:39.300150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:39.798611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.799046  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.298592  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.298670  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.798637  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.299138  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.798729  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.798815  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.799152  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:41.799215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:42.298723  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.298799  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.299170  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:42.798731  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.798836  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.799203  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.298908  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.299278  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.799167  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.799250  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:43.799661  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:44.299314  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.299416  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.299827  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:44.799577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.799657  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.800048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.298599  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.299047  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:46.298671  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.299126  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:46.299191  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:46.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.798850  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.799223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.299119  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.299231  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.299611  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.799336  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.799765  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:48.299501  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.299582  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.299947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:48.300006  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:48.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.798729  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.298752  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.798901  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.798982  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.298921  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.299003  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.798955  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.799416  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:50.799534  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:51.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.299214  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.299601  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:51.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.799388  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.799753  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.299413  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.299503  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.299839  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.631482  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:52.682310  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:52.684872  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.684901  118459 retry.go:31] will retry after 32.790446037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.799279  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.799368  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.799719  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:52.799778  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:53.299429  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.299873  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.799081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.858347  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:53.912029  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:53.912083  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:53.912107  118459 retry.go:31] will retry after 18.370397631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:54.298601  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:54.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.799095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:55.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.299226  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:55.299302  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:55.798903  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.798996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.298927  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.299347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:57.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.299509  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:57.299581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:57.799169  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.799283  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.299318  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.299391  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.299772  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.799563  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.799658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.800017  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.298677  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.299050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.798757  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:59.799217  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:00.298721  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.298821  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:00.798884  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.799337  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.298871  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.298949  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.299314  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.798878  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.799285  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:01.799345  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:02.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.299353  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:02.798928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.799012  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.799359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.298939  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.299014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.799249  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:03.799744  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:04.299367  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.299468  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.299800  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:04.799513  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.799614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.798722  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.799201  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:06.298786  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.298890  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.299232  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:06.299292  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:06.798807  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.798900  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.799230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.299263  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.299613  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.799343  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.799420  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.799763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:08.299428  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.299527  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.299872  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:08.299937  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:08.798593  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.798667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.799001  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.298582  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.798617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.798698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.298622  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.799101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:10.799164  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:11.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:11.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.282739  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:45:12.299378  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.299488  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.299877  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.333950  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336622  118459 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
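The repeated "connection refused" results above (both on localhost:8441 and 192.168.49.2:8441) indicate that nothing is accepting connections on the apiserver port while these addon applies and node-readiness polls run. A minimal sketch of how one might probe that endpoint, reusing the profile name functional-367186 and the address 192.168.49.2:8441 shown in the log; these commands are not part of the captured output and assume shell access to the host running the docker driver:

	# Check whether anything is listening on the apiserver port inside the node.
	minikube ssh -p functional-367186 -- sudo ss -tlnp | grep 8441
	# Probe the health endpoint directly; a refused connection matches the log,
	# while any HTTP response (even 401/403) would mean the apiserver is up.
	curl -k https://192.168.49.2:8441/healthz
	# See whether the kube-apiserver container is running under CRI-O.
	minikube ssh -p functional-367186 -- sudo crictl ps -a | grep kube-apiserver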
	I1008 14:45:12.799135  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.799209  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:12.799657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:13.299289  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.299709  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:13.798861  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.798943  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.298849  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.298932  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.299258  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.799040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:15.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.299098  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:15.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:15.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.799155  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.799530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.299229  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.299576  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.799320  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.799402  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.799740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.298566  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:17.799082  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:18.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.298700  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:18.798851  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.798935  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.298852  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.299298  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.798906  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.798988  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.799347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:19.799406  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:20.298933  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.299355  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:20.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.799025  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.799390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.298968  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.299041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.799011  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.799369  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:22.299008  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.299101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.299519  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:22.299580  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:22.799213  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.799289  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.299390  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.299767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.799544  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.799617  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.799951  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.298561  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.298641  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.798607  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.799048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:24.799112  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:25.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:25.476423  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:45:25.531081  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531142  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531259  118459 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:25.534376  118459 out.go:179] * Enabled addons: 
	I1008 14:45:25.535655  118459 addons.go:514] duration metric: took 1m38.356657385s for enable addons: enabled=[]
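At this point the addon phase gives up with enabled=[]: every kubectl apply failed because manifest validation needs the OpenAPI document from the unreachable apiserver. If the apiserver later becomes reachable, the same applies could be retried by hand; a hedged sketch reusing the exact binary, kubeconfig, and manifest paths from the log (the --validate=false fallback is the one the error text itself suggests, and only skips the OpenAPI download rather than fixing the root cause):

	# Re-run an addon manifest with the paths the log uses.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml
	# If the apiserver is healthy but the OpenAPI download still fails,
	# validation can be skipped, as the error message notes.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storageclass.yaml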
	I1008 14:45:25.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.798640  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.798959  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.298537  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.299011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.798610  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.798686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:26.799185  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:27.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.299111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:27.799210  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.799306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.799715  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.299395  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.299520  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.299905  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.798594  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:29.298630  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:29.299127  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:29.798717  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.798816  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.799196  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.299218  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.798893  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.799252  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:31.298834  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.299230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:31.299294  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:31.798829  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.798912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.799264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.298806  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.299262  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.799271  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:33.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.298966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.299345  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:33.299417  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:33.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.799654  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.299321  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.299423  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.299763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.799422  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.799533  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.799902  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.298559  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.298639  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.798592  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:35.799128  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:36.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.299156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:36.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.798779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.799148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.299530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:37.799713  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:38.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.299405  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.299766  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:38.799558  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.799667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.800040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.298689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.798644  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.799106  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:40.298658  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.299095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:40.299169  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:40.798657  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.799078  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.298629  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.798741  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.799102  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:42.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.299168  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:42.299237  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:42.798716  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.798788  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.298801  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.799130  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.799591  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:44.299252  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.299339  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.299712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:44.299773  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:44.799365  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.799825  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.299172  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.299287  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.299676  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.799167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.298781  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.298881  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.299294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.798856  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.798931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.799293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:46.799356  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:47.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.299246  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:47.799327  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.799406  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.299439  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.299542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.299919  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.798704  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:49.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:49.299162  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:49.798684  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.799141  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.298714  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.298795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.299144  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.798776  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.798853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.799207  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:51.298712  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.298791  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.299166  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:51.299231  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:51.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.798829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.799189  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.298885  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.299246  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.799319  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.298699  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.298776  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.299137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.799143  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.799505  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:53.799579  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:54.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.299276  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.299636  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:54.799331  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.799784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.299472  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.798585  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.798665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:56.298627  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:56.299148  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:56.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.799077  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.299523  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.799274  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.799642  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:58.299356  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.299473  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.299961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:58.300023  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:58.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.799059  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.298721  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.798755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.798766  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.798873  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.799228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:00.799293  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:01.298587  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.299023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:01.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.798731  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.799123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.298698  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.799202  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:03.298750  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.298833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:03.299244  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:03.799037  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.799122  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.799491  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.299167  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.299249  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.299630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.799414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.799795  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:05.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.299956  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:05.300019  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:05.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.298578  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.799117  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.299118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.299493  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.799139  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.799496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:07.799569  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:08.299035  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.299126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:08.799377  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.799812  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.298529  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.298607  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.298931  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.799111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:10.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.299130  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:10.299230  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:10.798708  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.798795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.298650  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.298984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.798571  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.798994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.299013  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.798609  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.799038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:12.799099  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:13.298602  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:13.798949  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.799028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.799365  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.299036  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.299417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.798995  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:14.799507  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:15.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:15.798739  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.299195  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.798747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.799211  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:17.299171  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.299252  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.299620  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:17.299687  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:17.799351  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.799429  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.799815  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.299581  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.299663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.300026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.798911  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.798995  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.799361  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.299017  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.798976  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.799059  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:19.799484  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:20.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.299063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.299433  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:20.799000  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.799073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.799422  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.299052  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.798986  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.799475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:21.799540  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:22.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.299073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.299421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:22.799016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.799089  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.299012  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.299086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.799352  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.799434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.799781  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:23.799842  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:24.299407  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.299843  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:24.799556  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.799961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.298635  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.298981  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.799082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:26.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.299076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:26.299150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:26.798664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.298937  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.299013  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.299343  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.798999  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:28.298903  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.298998  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.299342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:28.299409  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:28.799216  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.799293  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.299414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.299824  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.799545  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.298574  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.298654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.299010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.799063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:30.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:31.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.299084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:31.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.799089  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.298660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.798689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.798772  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.799169  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:32.799234  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:33.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:33.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.799101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.299040  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.299520  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.799151  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.799224  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.799552  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:34.799606  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:35.299196  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.299279  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:35.799293  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.799369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.799727  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.299400  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.299857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.799528  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.799601  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:36.799998  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:37.298659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.299094  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:37.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.798758  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.799112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.298715  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.298793  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.299167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.799005  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.799470  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:39.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.299482  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:39.299547  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:39.799057  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.799149  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.299162  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.299239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.299588  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.799254  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.799325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.799695  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:41.299348  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.299424  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.299798  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:41.299888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:41.799486  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.799571  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.799908  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.299014  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.798601  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.799021  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.298597  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.298675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.299015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.798718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:43.799158  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:44.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.299079  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:44.798646  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.298651  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.298724  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.798658  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:45.799190  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:46.298664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.298740  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.299081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:46.798660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.299010  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.299116  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.299468  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.799515  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:47.799577  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:48.299145  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.299237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.299586  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:48.799465  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.799540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.799893  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.299567  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.300081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.798774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.799156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:50.298747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.298852  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:50.299334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:50.798849  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.798940  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.799370  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.298974  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.299474  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.799088  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.799617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:52.299319  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.299399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.299750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:52.299815  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:52.799425  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.799532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.799968  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.298596  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.299057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.798951  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.799031  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.799358  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.298997  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.299141  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.299485  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.799052  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:54.799557  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:55.299016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.299471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:55.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.799427  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.299476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.799071  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:57.299385  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.299507  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.299911  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:57.299974  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:57.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.799954  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.298614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.298971  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.798638  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.798717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.298676  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.299184  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.798757  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.798865  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.799194  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:59.799261  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:00.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.299242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:00.798799  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.798882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.298869  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.298960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.299308  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.798868  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.798957  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:01.799395  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:02.298910  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.299004  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.299367  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:02.798967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.799471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.299109  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.799358  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.799437  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.799820  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:03.799888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:04.299467  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.299570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:04.798525  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.798605  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.798957  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.299064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:06.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.298755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.299139  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:06.299201  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:06.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.798775  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.799212  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.299173  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.299680  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.799348  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.799431  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.799818  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:08.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.299559  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.299887  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:08.299953  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:08.798622  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.298666  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.298743  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.299110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.798767  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.298823  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.299192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.799192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:10.799264  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:11.298772  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.298854  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.299193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:11.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.798887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.799274  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.298832  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.298912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.299277  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.798808  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.798896  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.799275  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:12.799334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:13.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.298906  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:13.799086  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.799171  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.799549  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.299233  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.299317  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.299685  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.799321  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.799395  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.799748  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:14.799845  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:15.299364  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.299434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.299756  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:15.799417  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.799861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.299614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.299915  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.798573  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.799007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:17.298827  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.299306  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:17.299381  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:17.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.798968  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.799302  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.298694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.799418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:19.299079  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.299153  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.299571  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:19.299630  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:19.799185  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.799262  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.799651  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.299313  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.299398  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.299801  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.800024  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:21.799168  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:22.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.298730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:22.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.798732  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.298704  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.298779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.299115  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.798943  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.799042  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:23.799509  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:24.298964  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.299040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.299390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:24.798583  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.798690  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.298624  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.299069  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.798756  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:26.298675  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:26.299192  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:26.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.799142  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.299005  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.299090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.299419  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.799045  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.799137  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.799544  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:28.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.299617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:28.299678  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:28.799473  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.799560  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.799899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.299985  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.798622  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.798983  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.298553  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.298632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.298995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.798697  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:30.799179  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:31.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.298695  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.299073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:31.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.298977  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.798588  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.798663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.799041  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:33.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:33.299097  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:33.798957  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.299095  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.299494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:35.299241  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:35.299795  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:35.799437  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.799530  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.799892  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.299548  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.798599  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.798674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.298967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.299050  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.299424  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.799403  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:37.799496  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:38.298988  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.299067  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.299408  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:38.799345  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.799481  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.799859  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.299510  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.299593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.299976  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:40.298711  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.298796  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:40.299245  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:40.798752  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.798837  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.799193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.298853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.299237  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.798946  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.799303  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:42.298889  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.298962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.299322  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:42.299384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:42.798944  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.298977  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.299047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.299368  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.799221  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.799302  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.799663  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:44.299294  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.299790  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:44.299872  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:44.799433  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.799542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.799888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.299563  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.299636  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.299993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:46.299512  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.299633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.300025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:46.300089  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:46.798790  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.798884  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.799229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.299087  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.299184  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.299563  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.798932  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.799009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.799428  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.299029  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.299106  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.299501  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.799380  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.799486  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.799833  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:48.799903  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:49.299564  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.300007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:49.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.799052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:51.298640  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.299093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:51.299156  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:51.798681  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.798761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.799132  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.298710  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.298829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.798883  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.799265  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:53.298856  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.298931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:53.299362  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:53.799190  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.799266  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.299296  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.799472  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.799553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.799952  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.298584  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.298660  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.798627  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.798713  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:55.799173  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:56.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.298834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:56.798788  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.798866  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.799242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.299122  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.299496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.799239  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.799714  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:57.799774  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:58.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.299464  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.299809  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:58.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.798672  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.799025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.298591  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.298674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.798618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.798694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.799057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:00.298633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:00.299182  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:00.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.799076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.298687  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.298762  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.299124  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.798694  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.798782  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.799125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.298730  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.298807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.299143  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:02.799242  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:03.298766  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.299191  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:03.799090  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.799168  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.799556  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.798656  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:05.298725  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.298803  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.299148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:05.299215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:05.798756  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.798859  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.298856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.299228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.799046  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.799394  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:07.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.299273  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:07.299732  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:07.799538  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.799609  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.799950  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.299147  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.799521  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:09.299345  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.299428  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.299805  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:09.299871  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:09.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.298815  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.298898  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.799063  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.799142  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.799548  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:11.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.299512  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.299861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:11.299938  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:11.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.298858  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.298934  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.298773  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.298847  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.799118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.799495  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:13.799564  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:14.299338  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.299418  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.299784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:14.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.798633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.798966  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.299111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.798836  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:16.299034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.299119  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.299472  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:16.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:16.799263  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.799716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.299984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.799093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.298690  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.298768  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.299127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.798926  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.799002  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:18.799405  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:19.298954  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.299028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.299371  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:19.798980  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.299425  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.798994  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.799140  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.799508  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:20.799581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:21.299202  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.299281  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.299656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:21.799334  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.799412  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.799779  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.299478  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.299564  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.798566  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.798990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:23.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.298653  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:23.299069  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:23.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.799024  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.298958  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.299387  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.799037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:25.299272  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.299346  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:25.299785  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:25.799564  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.799644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.800010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.298851  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.299197  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.798945  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.799020  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:27.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.299762  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:27.299828  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:27.799408  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.799498  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.799868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.299505  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.299589  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.299938  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.798710  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.799066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.298603  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.299072  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.799067  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:29.799143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:30.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.298723  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:30.798639  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.798719  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.298623  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:32.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.299071  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:32.299152  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:32.798666  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.798747  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.799135  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.298695  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.798993  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.799069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:34.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.299476  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.299807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:34.299873  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:34.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.798675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.298918  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.299259  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.799014  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.299386  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.299754  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.798548  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.798627  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:36.799056  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:37.298853  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.298929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.299261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:37.798581  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.298605  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.799034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:38.799603  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:39.299424  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.299514  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.299862  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:39.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.799092  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.298907  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.298997  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.299335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.799204  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.799649  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:40.799728  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:41.299541  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.299632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.299970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:41.798741  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.798831  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.799187  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.298986  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.299069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.299473  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.799301  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.799376  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.799728  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:42.799794  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:43.298557  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.298631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.299030  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:43.798919  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.799001  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.799377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.299220  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.299306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.299666  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.799308  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.799379  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.799750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:45.299391  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.299504  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.299837  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:45.299906  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:45.799476  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.799562  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.799953  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.298535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.298610  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.298988  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.798683  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.799014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:47.799500  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:48.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.299084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.299436  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:48.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.799397  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.799757  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.299469  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.299546  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.798748  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.799121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:50.298729  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.298811  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.299173  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:50.299238  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:50.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.798856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.799248  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.298812  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.298897  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.798948  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:52.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.299070  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:52.299545  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:52.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.799504  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.299161  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.299264  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.299675  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.799435  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.799534  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.799875  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.298718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.299112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.798929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.799294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:54.799357  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:55.299157  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.299235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.299606  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:55.799386  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.799470  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.799852  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.299065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.798779  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.798868  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.799243  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:57.299138  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.299227  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.299600  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:57.299666  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:57.799470  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.799545  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.799918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.298679  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.298761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.299149  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.799015  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.799090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:59.299293  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.299392  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.299742  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:59.299808  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:59.798577  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.299326  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.799153  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:01.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.299553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.299898  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:01.299965  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:01.798701  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.298874  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.299315  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.799145  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.799228  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.799568  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.299513  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.798557  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.799073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:03.799140  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:04.298885  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.298976  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.299401  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:04.799261  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.799710  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.299549  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.299642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.300048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.798774  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.798849  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.799206  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:05.799268  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:06.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.299053  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:06.799240  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.799328  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.799681  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.299414  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.299532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.799044  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:08.298825  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:08.299350  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:08.799137  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.799221  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.799589  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.299540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.299921  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.799064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:10.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.298925  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.299313  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:10.299380  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:10.799149  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.799223  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.799572  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.299419  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.299531  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.299928  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.798698  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.798777  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.799140  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:12.298875  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.299357  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:12.299428  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:12.799215  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.799641  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.299434  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.299538  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.299901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.798658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.798993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.298718  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.298806  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.299190  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.798984  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.799423  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:14.799511  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:15.299254  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.299343  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:15.798574  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.798655  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.298700  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.298800  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.299145  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.799300  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:17.299095  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.299193  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.299535  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:17.299597  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:17.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.799337  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.299759  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.799524  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.799598  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:19.299552  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.299638  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:19.300058  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:19.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.299002  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.798789  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.298846  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.298952  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.299301  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.799159  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.799239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.799630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:21.799697  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:22.299522  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.299619  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.299991  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:22.798758  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.798834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.799181  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.299061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.299437  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.799357  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.799433  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.799786  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:23.799850  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:24.298547  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:24.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.798835  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.799161  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.298901  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.298996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.299334  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.799154  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.799236  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.799604  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:26.299399  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.299521  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.299888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:26.299960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:26.798629  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.799035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.298805  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.298901  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.299256  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.798972  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.799378  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.299186  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.799616  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.800091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:28.800170  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:29.298943  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.299021  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.299362  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:29.799176  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.799282  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.299485  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.299566  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.299899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.798586  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:31.298771  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.299157  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:31.299210  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:31.798882  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.798989  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.299195  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.299278  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.299631  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.799405  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.799515  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.799866  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.298635  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.798843  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.798922  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.799266  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:33.799342  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:34.299019  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.299432  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:34.799270  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.799358  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.799712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.299543  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.299995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.798712  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.798807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.799171  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:36.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.298739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:36.299199  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:36.798682  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.299039  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.299475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.799319  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.799403  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.298633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.298999  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.799060  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:38.799123  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:39.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.298919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:39.799162  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.799585  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.299409  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.299508  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.299869  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.799084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:40.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:41.298831  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.298921  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:41.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.299467  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.299819  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.798568  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.798643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.798984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:43.298738  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.298822  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:43.299318  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:43.799035  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.799483  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.299382  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.299773  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.798575  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.799012  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.298748  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.298824  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.299159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.798886  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.798960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.799321  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:45.799384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:46.299022  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.299330  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:46.798742  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.798830  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.799234  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:47.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:49:47.299208  118459 node_ready.go:38] duration metric: took 6m0.000826952s for node "functional-367186" to be "Ready" ...
	I1008 14:49:47.302039  118459 out.go:203] 
	W1008 14:49:47.303804  118459 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 14:49:47.303820  118459 out.go:285] * 
	* 
	W1008 14:49:47.305511  118459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:49:47.306606  118459 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-367186 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.188356768s for "functional-367186" cluster.
I1008 14:49:47.773568   98900 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (301.882365ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-840888                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-840888   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p download-docker-250844 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p download-docker-250844                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p binary-mirror-198013 --alsologtostderr --binary-mirror http://127.0.0.1:41765 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p binary-mirror-198013                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ addons  │ enable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ addons  │ disable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ start   │ -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │ 08 Oct 25 14:26 UTC │
	│ start   │ -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-367186      │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-367186      │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:43:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:43:43.627861  118459 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:43:43.627954  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.627958  118459 out.go:374] Setting ErrFile to fd 2...
	I1008 14:43:43.627962  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.628171  118459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:43:43.628614  118459 out.go:368] Setting JSON to false
	I1008 14:43:43.629495  118459 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8775,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:43:43.629593  118459 start.go:141] virtualization: kvm guest
	I1008 14:43:43.631500  118459 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:43:43.632767  118459 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:43:43.632773  118459 notify.go:220] Checking for updates...
	I1008 14:43:43.634937  118459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:43:43.636218  118459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:43.640666  118459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:43:43.642185  118459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:43:43.643421  118459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:43:43.644930  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:43.645039  118459 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:43:43.667985  118459 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:43:43.668119  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.723136  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.713080092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.723287  118459 docker.go:318] overlay module found
	I1008 14:43:43.725936  118459 out.go:179] * Using the docker driver based on existing profile
	I1008 14:43:43.727069  118459 start.go:305] selected driver: docker
	I1008 14:43:43.727087  118459 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.727171  118459 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:43:43.727263  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.781426  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.772365606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.782086  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:43.782179  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:43.782243  118459 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.784039  118459 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:43:43.785148  118459 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:43:43.786245  118459 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:43:43.787146  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:43.787178  118459 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:43:43.787189  118459 cache.go:58] Caching tarball of preloaded images
	I1008 14:43:43.787237  118459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:43:43.787273  118459 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:43:43.787283  118459 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
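The cache hit above only means the preload tarball already exists on the host; a minimal, hedged equivalent check from a shell (path copied from this log, adjust for your own MINIKUBE_HOME):
  ls -lh /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4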
	I1008 14:43:43.787359  118459 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:43:43.806536  118459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:43:43.806562  118459 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:43:43.806584  118459 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:43:43.806623  118459 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:43:43.806704  118459 start.go:364] duration metric: took 49.444µs to acquireMachinesLock for "functional-367186"
	I1008 14:43:43.806736  118459 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:43:43.806747  118459 fix.go:54] fixHost starting: 
	I1008 14:43:43.806975  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:43.822750  118459 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:43:43.822776  118459 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:43:43.824577  118459 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:43:43.824603  118459 machine.go:93] provisionDockerMachine start ...
	I1008 14:43:43.824673  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:43.841160  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:43.841463  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:43.841483  118459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:43:43.985591  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:43.985624  118459 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:43:43.985682  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.003073  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.003294  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.003316  118459 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:43:44.156671  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:44.156765  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.173583  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.173820  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.173845  118459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:43:44.319171  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:43:44.319200  118459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:43:44.319238  118459 ubuntu.go:190] setting up certificates
	I1008 14:43:44.319253  118459 provision.go:84] configureAuth start
	I1008 14:43:44.319306  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:44.337134  118459 provision.go:143] copyHostCerts
	I1008 14:43:44.337168  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337204  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:43:44.337226  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337295  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:43:44.337373  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337398  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:43:44.337405  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337431  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:43:44.337503  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337524  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:43:44.337531  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337557  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:43:44.337611  118459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:43:44.449681  118459 provision.go:177] copyRemoteCerts
	I1008 14:43:44.449756  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:43:44.449792  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.466984  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:44.569881  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:43:44.569953  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:43:44.587517  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:43:44.587583  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:43:44.605065  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:43:44.605124  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:43:44.622323  118459 provision.go:87] duration metric: took 303.055536ms to configureAuth
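The copyRemoteCerts step above scp'd ca.pem, server.pem and server-key.pem into /etc/docker on the node; a hedged spot-check from the host (profile name taken from this run, and `minikube -p <profile> ssh -- <cmd>` assumed available as in other tests in this suite):
  minikube -p functional-367186 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem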
	I1008 14:43:44.622354  118459 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:43:44.622537  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:44.622644  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.639387  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.639612  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.639636  118459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:43:44.900547  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:43:44.900571  118459 machine.go:96] duration metric: took 1.07595926s to provisionDockerMachine
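The SSH command above writes a one-line environment file and restarts CRI-O so the extra flag takes effect; a hedged way to confirm it from the host (not something the test itself runs):
  # expected contents, taken from the logged command:
  #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  minikube -p functional-367186 ssh -- cat /etc/sysconfig/crio.minikube
  minikube -p functional-367186 ssh -- sudo systemctl is-active crio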
	I1008 14:43:44.900586  118459 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:43:44.900600  118459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:43:44.900655  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:43:44.900706  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.917783  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.020925  118459 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:43:45.024356  118459 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1008 14:43:45.024381  118459 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1008 14:43:45.024389  118459 command_runner.go:130] > VERSION_ID="12"
	I1008 14:43:45.024395  118459 command_runner.go:130] > VERSION="12 (bookworm)"
	I1008 14:43:45.024402  118459 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1008 14:43:45.024406  118459 command_runner.go:130] > ID=debian
	I1008 14:43:45.024410  118459 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1008 14:43:45.024415  118459 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1008 14:43:45.024420  118459 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1008 14:43:45.024512  118459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:43:45.024537  118459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:43:45.024550  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:43:45.024614  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:43:45.024709  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:43:45.024722  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 14:43:45.024832  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:43:45.024842  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> /etc/test/nested/copy/98900/hosts
	I1008 14:43:45.024895  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:43:45.032438  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:45.049657  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:43:45.066943  118459 start.go:296] duration metric: took 166.34143ms for postStartSetup
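The filesync step mirrors everything under .minikube/files into the node at the same path; a quick hedged check of the two assets reported above (profile name from this run):
  minikube -p functional-367186 ssh -- ls -l /etc/ssl/certs/989002.pem /etc/test/nested/copy/98900/hosts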
	I1008 14:43:45.067016  118459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:43:45.067050  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.084921  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.184592  118459 command_runner.go:130] > 50%
	I1008 14:43:45.184676  118459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:43:45.188918  118459 command_runner.go:130] > 148G
	I1008 14:43:45.189157  118459 fix.go:56] duration metric: took 1.382403598s for fixHost
	I1008 14:43:45.189184  118459 start.go:83] releasing machines lock for "functional-367186", held for 1.382467794s
	I1008 14:43:45.189256  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:45.206786  118459 ssh_runner.go:195] Run: cat /version.json
	I1008 14:43:45.206834  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.206924  118459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:43:45.207047  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.224940  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.226308  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.323475  118459 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1008 14:43:45.323661  118459 ssh_runner.go:195] Run: systemctl --version
	I1008 14:43:45.374536  118459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 14:43:45.376350  118459 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1008 14:43:45.376387  118459 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1008 14:43:45.376484  118459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:43:45.412862  118459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:43:45.417295  118459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 14:43:45.417656  118459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:43:45.417717  118459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:43:45.425598  118459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:43:45.425618  118459 start.go:495] detecting cgroup driver to use...
	I1008 14:43:45.425645  118459 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:43:45.425686  118459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:43:45.440680  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:43:45.452844  118459 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:43:45.452899  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:43:45.466598  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:43:45.477998  118459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:43:45.564577  118459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:43:45.653273  118459 docker.go:234] disabling docker service ...
	I1008 14:43:45.653343  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:43:45.667540  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:43:45.679916  118459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:43:45.764673  118459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:43:45.852326  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:43:45.864944  118459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:43:45.878738  118459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
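With that file in place, crictl inside the node talks to CRI-O without needing --runtime-endpoint on every call; a hedged sanity check (crictl info is a generic command, not part of this test):
  # /etc/crictl.yaml as written above:
  #   runtime-endpoint: unix:///var/run/crio/crio.sock
  minikube -p functional-367186 ssh -- sudo crictl info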
	I1008 14:43:45.878793  118459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:43:45.878844  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.887987  118459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:43:45.888052  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.896857  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.905895  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.914639  118459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:43:45.922953  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.931880  118459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.940059  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
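Taken together, the sed edits above leave the CRI-O drop-in pointing at the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl; a hedged way to confirm the resulting keys (the drop-in also carries other settings shipped with the base image):
  # expected lines in /etc/crio/crio.conf.d/02-crio.conf after the edits:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
  minikube -p functional-367186 ssh -- sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf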
	I1008 14:43:45.948635  118459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:43:45.955347  118459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 14:43:45.956050  118459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:43:45.963162  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.045488  118459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:43:46.156934  118459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:43:46.156997  118459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:43:46.161038  118459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 14:43:46.161067  118459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 14:43:46.161077  118459 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1008 14:43:46.161086  118459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.161094  118459 command_runner.go:130] > Access: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161118  118459 command_runner.go:130] > Modify: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161129  118459 command_runner.go:130] > Change: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161138  118459 command_runner.go:130] >  Birth: 2025-10-08 14:43:46.140175728 +0000
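minikube gives the restarted runtime up to 60s to expose its socket before failing the start; a hypothetical manual equivalent inside the node would be:
  timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done' && echo crio.sock ready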
	I1008 14:43:46.161173  118459 start.go:563] Will wait 60s for crictl version
	I1008 14:43:46.161212  118459 ssh_runner.go:195] Run: which crictl
	I1008 14:43:46.164650  118459 command_runner.go:130] > /usr/local/bin/crictl
	I1008 14:43:46.164746  118459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:43:46.189255  118459 command_runner.go:130] > Version:  0.1.0
	I1008 14:43:46.189279  118459 command_runner.go:130] > RuntimeName:  cri-o
	I1008 14:43:46.189294  118459 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1008 14:43:46.189299  118459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 14:43:46.189317  118459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:43:46.189365  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.215704  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.215734  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.215741  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.215746  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.215750  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.215755  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.215762  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.215770  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.215806  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.215819  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.215825  118459 command_runner.go:130] >      static
	I1008 14:43:46.215835  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.215846  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.215857  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.215867  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.215877  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.215885  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.215897  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.215909  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.215921  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.217136  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.243203  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.243231  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.243241  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.243249  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.243256  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.243264  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.243272  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.243281  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.243293  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.243299  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.243304  118459 command_runner.go:130] >      static
	I1008 14:43:46.243312  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.243317  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.243327  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.243336  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.243348  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.243358  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.243374  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.243382  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.243390  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.246714  118459 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:43:46.248034  118459 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:43:46.264534  118459 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:43:46.268778  118459 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1008 14:43:46.268905  118459 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:43:46.269051  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:46.269113  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.298040  118459 command_runner.go:130] > {
	I1008 14:43:46.298059  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.298064  118459 command_runner.go:130] >     {
	I1008 14:43:46.298072  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.298077  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298082  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.298087  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298091  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298100  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.298109  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.298112  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298117  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.298121  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298138  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298146  118459 command_runner.go:130] >     },
	I1008 14:43:46.298151  118459 command_runner.go:130] >     {
	I1008 14:43:46.298164  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.298170  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298175  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.298181  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298185  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298191  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.298201  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.298207  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298210  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.298217  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298225  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298234  118459 command_runner.go:130] >     },
	I1008 14:43:46.298243  118459 command_runner.go:130] >     {
	I1008 14:43:46.298255  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.298262  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298267  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.298273  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298277  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298283  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.298293  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.298298  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298302  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.298309  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.298315  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298323  118459 command_runner.go:130] >     },
	I1008 14:43:46.298328  118459 command_runner.go:130] >     {
	I1008 14:43:46.298341  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.298350  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298359  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.298362  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298371  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298380  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.298387  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.298393  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298398  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.298408  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298417  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298425  118459 command_runner.go:130] >       },
	I1008 14:43:46.298438  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298461  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298467  118459 command_runner.go:130] >     },
	I1008 14:43:46.298472  118459 command_runner.go:130] >     {
	I1008 14:43:46.298481  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.298490  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298499  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.298507  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298514  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298521  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.298532  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.298540  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298548  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.298557  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298566  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298573  118459 command_runner.go:130] >       },
	I1008 14:43:46.298579  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298588  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298597  118459 command_runner.go:130] >     },
	I1008 14:43:46.298602  118459 command_runner.go:130] >     {
	I1008 14:43:46.298612  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.298619  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298628  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.298636  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298647  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298662  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.298676  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.298684  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298690  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.298699  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298705  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298713  118459 command_runner.go:130] >       },
	I1008 14:43:46.298725  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298735  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298744  118459 command_runner.go:130] >     },
	I1008 14:43:46.298752  118459 command_runner.go:130] >     {
	I1008 14:43:46.298762  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.298784  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298800  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.298808  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298815  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298829  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.298843  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.298851  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298860  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.298864  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298867  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298871  118459 command_runner.go:130] >     },
	I1008 14:43:46.298882  118459 command_runner.go:130] >     {
	I1008 14:43:46.298891  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.298895  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298899  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.298903  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298907  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298914  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.298931  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.298937  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298941  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.298948  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298952  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298957  118459 command_runner.go:130] >       },
	I1008 14:43:46.298961  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298967  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298971  118459 command_runner.go:130] >     },
	I1008 14:43:46.298978  118459 command_runner.go:130] >     {
	I1008 14:43:46.298987  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.298996  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.299004  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.299025  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299035  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.299047  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.299060  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.299068  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299074  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.299081  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.299087  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.299095  118459 command_runner.go:130] >       },
	I1008 14:43:46.299100  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.299108  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.299113  118459 command_runner.go:130] >     }
	I1008 14:43:46.299117  118459 command_runner.go:130] >   ]
	I1008 14:43:46.299125  118459 command_runner.go:130] > }
	I1008 14:43:46.300090  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.300109  118459 crio.go:433] Images already preloaded, skipping extraction
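The "already preloaded" decision above comes from comparing the repoTags in that JSON against the image list expected for Kubernetes v1.34.1 on CRI-O; a hedged equivalent listing from the host (jq on the host is an assumption, not something the test uses):
  minikube -p functional-367186 ssh -- sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort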
	I1008 14:43:46.300168  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.325949  118459 command_runner.go:130] > {
	I1008 14:43:46.325970  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.325974  118459 command_runner.go:130] >     {
	I1008 14:43:46.325985  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.325990  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.325996  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.325999  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326003  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326016  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.326031  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.326040  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326047  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.326055  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326063  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326068  118459 command_runner.go:130] >     },
	I1008 14:43:46.326072  118459 command_runner.go:130] >     {
	I1008 14:43:46.326083  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.326089  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326094  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.326100  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326104  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326125  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.326136  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.326142  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326147  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.326151  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326158  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326163  118459 command_runner.go:130] >     },
	I1008 14:43:46.326166  118459 command_runner.go:130] >     {
	I1008 14:43:46.326172  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.326178  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326183  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.326188  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326192  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326201  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.326208  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.326213  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326219  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.326223  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.326226  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326229  118459 command_runner.go:130] >     },
	I1008 14:43:46.326232  118459 command_runner.go:130] >     {
	I1008 14:43:46.326238  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.326245  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326249  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.326252  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326256  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326262  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.326269  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.326275  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326279  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.326284  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326287  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326293  118459 command_runner.go:130] >       },
	I1008 14:43:46.326307  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326314  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326317  118459 command_runner.go:130] >     },
	I1008 14:43:46.326320  118459 command_runner.go:130] >     {
	I1008 14:43:46.326326  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.326331  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326335  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.326338  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326342  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326349  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.326358  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.326361  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326366  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.326369  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326373  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326378  118459 command_runner.go:130] >       },
	I1008 14:43:46.326382  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326385  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326392  118459 command_runner.go:130] >     },
	I1008 14:43:46.326395  118459 command_runner.go:130] >     {
	I1008 14:43:46.326401  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.326407  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326412  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.326415  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326419  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326429  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.326436  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.326453  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326460  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.326468  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326472  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326475  118459 command_runner.go:130] >       },
	I1008 14:43:46.326479  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326490  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326496  118459 command_runner.go:130] >     },
	I1008 14:43:46.326499  118459 command_runner.go:130] >     {
	I1008 14:43:46.326505  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.326511  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326515  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.326518  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326522  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326531  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.326538  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.326543  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326548  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.326551  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326555  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326558  118459 command_runner.go:130] >     },
	I1008 14:43:46.326561  118459 command_runner.go:130] >     {
	I1008 14:43:46.326567  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.326571  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326575  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.326578  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326582  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326588  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.326611  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.326617  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326621  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.326625  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326631  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326634  118459 command_runner.go:130] >       },
	I1008 14:43:46.326638  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326643  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326646  118459 command_runner.go:130] >     },
	I1008 14:43:46.326650  118459 command_runner.go:130] >     {
	I1008 14:43:46.326655  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.326666  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326673  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.326676  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326680  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326688  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.326698  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.326705  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326709  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.326714  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326718  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.326722  118459 command_runner.go:130] >       },
	I1008 14:43:46.326726  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326732  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.326735  118459 command_runner.go:130] >     }
	I1008 14:43:46.326738  118459 command_runner.go:130] >   ]
	I1008 14:43:46.326740  118459 command_runner.go:130] > }
	I1008 14:43:46.326842  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.326863  118459 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:43:46.326869  118459 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:43:46.326972  118459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:43:46.327030  118459 ssh_runner.go:195] Run: crio config
	I1008 14:43:46.368296  118459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 14:43:46.368332  118459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 14:43:46.368340  118459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 14:43:46.368344  118459 command_runner.go:130] > #
	I1008 14:43:46.368350  118459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 14:43:46.368356  118459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 14:43:46.368362  118459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 14:43:46.368376  118459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 14:43:46.368381  118459 command_runner.go:130] > # reload'.
	I1008 14:43:46.368392  118459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 14:43:46.368405  118459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 14:43:46.368418  118459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 14:43:46.368433  118459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 14:43:46.368458  118459 command_runner.go:130] > [crio]
	I1008 14:43:46.368472  118459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 14:43:46.368480  118459 command_runner.go:130] > # containers images, in this directory.
	I1008 14:43:46.368492  118459 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1008 14:43:46.368502  118459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 14:43:46.368514  118459 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1008 14:43:46.368525  118459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 14:43:46.368536  118459 command_runner.go:130] > # imagestore = ""
	I1008 14:43:46.368546  118459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 14:43:46.368559  118459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 14:43:46.368566  118459 command_runner.go:130] > # storage_driver = "overlay"
	I1008 14:43:46.368580  118459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 14:43:46.368587  118459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 14:43:46.368594  118459 command_runner.go:130] > # storage_option = [
	I1008 14:43:46.368599  118459 command_runner.go:130] > # ]
	I1008 14:43:46.368608  118459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 14:43:46.368621  118459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 14:43:46.368631  118459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 14:43:46.368640  118459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 14:43:46.368651  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 14:43:46.368666  118459 command_runner.go:130] > # always happen on a node reboot
	I1008 14:43:46.368678  118459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 14:43:46.368702  118459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 14:43:46.368714  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 14:43:46.368726  118459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 14:43:46.368736  118459 command_runner.go:130] > # version_file_persist = ""
	I1008 14:43:46.368751  118459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 14:43:46.368767  118459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 14:43:46.368775  118459 command_runner.go:130] > # internal_wipe = true
	I1008 14:43:46.368791  118459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 14:43:46.368802  118459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 14:43:46.368820  118459 command_runner.go:130] > # internal_repair = true
	I1008 14:43:46.368834  118459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 14:43:46.368847  118459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 14:43:46.368859  118459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 14:43:46.368869  118459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 14:43:46.368882  118459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 14:43:46.368891  118459 command_runner.go:130] > [crio.api]
	I1008 14:43:46.368900  118459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 14:43:46.368910  118459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 14:43:46.368921  118459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 14:43:46.368931  118459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 14:43:46.368942  118459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 14:43:46.368954  118459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 14:43:46.368963  118459 command_runner.go:130] > # stream_port = "0"
	I1008 14:43:46.368971  118459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 14:43:46.368981  118459 command_runner.go:130] > # stream_enable_tls = false
	I1008 14:43:46.368992  118459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 14:43:46.369002  118459 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 14:43:46.369012  118459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 14:43:46.369025  118459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369033  118459 command_runner.go:130] > # stream_tls_cert = ""
	I1008 14:43:46.369043  118459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 14:43:46.369055  118459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369075  118459 command_runner.go:130] > # stream_tls_key = ""
	I1008 14:43:46.369092  118459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 14:43:46.369106  118459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 14:43:46.369121  118459 command_runner.go:130] > # automatically pick up the changes.
	I1008 14:43:46.369130  118459 command_runner.go:130] > # stream_tls_ca = ""
	I1008 14:43:46.369153  118459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369163  118459 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1008 14:43:46.369176  118459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369186  118459 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1008 14:43:46.369197  118459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 14:43:46.369209  118459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 14:43:46.369219  118459 command_runner.go:130] > [crio.runtime]
	I1008 14:43:46.369229  118459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 14:43:46.369240  118459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 14:43:46.369246  118459 command_runner.go:130] > # "nofile=1024:2048"
	I1008 14:43:46.369260  118459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 14:43:46.369269  118459 command_runner.go:130] > # default_ulimits = [
	I1008 14:43:46.369275  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369288  118459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 14:43:46.369296  118459 command_runner.go:130] > # no_pivot = false
	I1008 14:43:46.369305  118459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 14:43:46.369317  118459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 14:43:46.369327  118459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 14:43:46.369338  118459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 14:43:46.369348  118459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 14:43:46.369359  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369368  118459 command_runner.go:130] > # conmon = ""
	I1008 14:43:46.369375  118459 command_runner.go:130] > # Cgroup setting for conmon
	I1008 14:43:46.369386  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 14:43:46.369393  118459 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 14:43:46.369402  118459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 14:43:46.369410  118459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 14:43:46.369421  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369430  118459 command_runner.go:130] > # conmon_env = [
	I1008 14:43:46.369435  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369456  118459 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 14:43:46.369465  118459 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 14:43:46.369475  118459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 14:43:46.369484  118459 command_runner.go:130] > # default_env = [
	I1008 14:43:46.369489  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369498  118459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 14:43:46.369516  118459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1008 14:43:46.369528  118459 command_runner.go:130] > # selinux = false
	I1008 14:43:46.369539  118459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 14:43:46.369555  118459 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1008 14:43:46.369564  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369570  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.369582  118459 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1008 14:43:46.369602  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369609  118459 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1008 14:43:46.369619  118459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 14:43:46.369631  118459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 14:43:46.369644  118459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 14:43:46.369653  118459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 14:43:46.369661  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369672  118459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 14:43:46.369680  118459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 14:43:46.369690  118459 command_runner.go:130] > # the cgroup blockio controller.
	I1008 14:43:46.369697  118459 command_runner.go:130] > # blockio_config_file = ""
	I1008 14:43:46.369709  118459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 14:43:46.369718  118459 command_runner.go:130] > # blockio parameters.
	I1008 14:43:46.369724  118459 command_runner.go:130] > # blockio_reload = false
	I1008 14:43:46.369735  118459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 14:43:46.369744  118459 command_runner.go:130] > # irqbalance daemon.
	I1008 14:43:46.369857  118459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 14:43:46.369873  118459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 14:43:46.369884  118459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 14:43:46.369898  118459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 14:43:46.369909  118459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 14:43:46.369924  118459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 14:43:46.369934  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369943  118459 command_runner.go:130] > # rdt_config_file = ""
	I1008 14:43:46.369950  118459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 14:43:46.369959  118459 command_runner.go:130] > # cgroup_manager = "systemd"
	I1008 14:43:46.369968  118459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 14:43:46.369979  118459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 14:43:46.369989  118459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 14:43:46.370002  118459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 14:43:46.370011  118459 command_runner.go:130] > # will be added.
	I1008 14:43:46.370027  118459 command_runner.go:130] > # default_capabilities = [
	I1008 14:43:46.370036  118459 command_runner.go:130] > # 	"CHOWN",
	I1008 14:43:46.370044  118459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 14:43:46.370051  118459 command_runner.go:130] > # 	"FSETID",
	I1008 14:43:46.370054  118459 command_runner.go:130] > # 	"FOWNER",
	I1008 14:43:46.370062  118459 command_runner.go:130] > # 	"SETGID",
	I1008 14:43:46.370083  118459 command_runner.go:130] > # 	"SETUID",
	I1008 14:43:46.370093  118459 command_runner.go:130] > # 	"SETPCAP",
	I1008 14:43:46.370099  118459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 14:43:46.370108  118459 command_runner.go:130] > # 	"KILL",
	I1008 14:43:46.370113  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370127  118459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 14:43:46.370140  118459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 14:43:46.370152  118459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 14:43:46.370164  118459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 14:43:46.370173  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370183  118459 command_runner.go:130] > default_sysctls = [
	I1008 14:43:46.370193  118459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 14:43:46.370198  118459 command_runner.go:130] > ]
	I1008 14:43:46.370209  118459 command_runner.go:130] > # List of devices on the host that a
	I1008 14:43:46.370249  118459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 14:43:46.370259  118459 command_runner.go:130] > # allowed_devices = [
	I1008 14:43:46.370266  118459 command_runner.go:130] > # 	"/dev/fuse",
	I1008 14:43:46.370270  118459 command_runner.go:130] > # 	"/dev/net/tun",
	I1008 14:43:46.370277  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370285  118459 command_runner.go:130] > # List of additional devices. specified as
	I1008 14:43:46.370300  118459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 14:43:46.370312  118459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 14:43:46.370324  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370333  118459 command_runner.go:130] > # additional_devices = [
	I1008 14:43:46.370341  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370351  118459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 14:43:46.370360  118459 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 14:43:46.370366  118459 command_runner.go:130] > # 	"/etc/cdi",
	I1008 14:43:46.370370  118459 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 14:43:46.370378  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370387  118459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 14:43:46.370400  118459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 14:43:46.370411  118459 command_runner.go:130] > # Defaults to false.
	I1008 14:43:46.370422  118459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 14:43:46.370434  118459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 14:43:46.370462  118459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 14:43:46.370470  118459 command_runner.go:130] > # hooks_dir = [
	I1008 14:43:46.370481  118459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 14:43:46.370491  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370503  118459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 14:43:46.370515  118459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 14:43:46.370526  118459 command_runner.go:130] > # its default mounts from the following two files:
	I1008 14:43:46.370532  118459 command_runner.go:130] > #
	I1008 14:43:46.370538  118459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 14:43:46.370550  118459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 14:43:46.370562  118459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 14:43:46.370571  118459 command_runner.go:130] > #
	I1008 14:43:46.370580  118459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 14:43:46.370593  118459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 14:43:46.370605  118459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 14:43:46.370615  118459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 14:43:46.370623  118459 command_runner.go:130] > #
	I1008 14:43:46.370629  118459 command_runner.go:130] > # default_mounts_file = ""
	I1008 14:43:46.370637  118459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 14:43:46.370647  118459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 14:43:46.370657  118459 command_runner.go:130] > # pids_limit = -1
	I1008 14:43:46.370667  118459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1008 14:43:46.370679  118459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 14:43:46.370693  118459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 14:43:46.370708  118459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 14:43:46.370717  118459 command_runner.go:130] > # log_size_max = -1
	I1008 14:43:46.370728  118459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 14:43:46.370735  118459 command_runner.go:130] > # log_to_journald = false
	I1008 14:43:46.370743  118459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 14:43:46.370755  118459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 14:43:46.370763  118459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 14:43:46.370774  118459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 14:43:46.370785  118459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 14:43:46.370794  118459 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 14:43:46.370804  118459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 14:43:46.370819  118459 command_runner.go:130] > # read_only = false
	I1008 14:43:46.370828  118459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 14:43:46.370841  118459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 14:43:46.370850  118459 command_runner.go:130] > # live configuration reload.
	I1008 14:43:46.370856  118459 command_runner.go:130] > # log_level = "info"
	I1008 14:43:46.370868  118459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 14:43:46.370884  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.370893  118459 command_runner.go:130] > # log_filter = ""
	I1008 14:43:46.370905  118459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370917  118459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 14:43:46.370923  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370934  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.370943  118459 command_runner.go:130] > # uid_mappings = ""
	I1008 14:43:46.370955  118459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370967  118459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 14:43:46.370979  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370994  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371003  118459 command_runner.go:130] > # gid_mappings = ""
	I1008 14:43:46.371012  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 14:43:46.371023  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371037  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371055  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371064  118459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 14:43:46.371076  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 14:43:46.371087  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371100  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371112  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371122  118459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 14:43:46.371134  118459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 14:43:46.371146  118459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 14:43:46.371158  118459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 14:43:46.371168  118459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 14:43:46.371179  118459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 14:43:46.371188  118459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 14:43:46.371193  118459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 14:43:46.371204  118459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 14:43:46.371214  118459 command_runner.go:130] > # drop_infra_ctr = true
	I1008 14:43:46.371224  118459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 14:43:46.371235  118459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 14:43:46.371249  118459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 14:43:46.371258  118459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 14:43:46.371276  118459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 14:43:46.371285  118459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 14:43:46.371294  118459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 14:43:46.371306  118459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 14:43:46.371316  118459 command_runner.go:130] > # shared_cpuset = ""
	I1008 14:43:46.371326  118459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 14:43:46.371337  118459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 14:43:46.371346  118459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 14:43:46.371358  118459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 14:43:46.371366  118459 command_runner.go:130] > # pinns_path = ""
	I1008 14:43:46.371374  118459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 14:43:46.371385  118459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 14:43:46.371395  118459 command_runner.go:130] > # enable_criu_support = true
	I1008 14:43:46.371405  118459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 14:43:46.371417  118459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 14:43:46.371422  118459 command_runner.go:130] > # enable_pod_events = false
	I1008 14:43:46.371434  118459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 14:43:46.371453  118459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 14:43:46.371465  118459 command_runner.go:130] > # default_runtime = "crun"
	I1008 14:43:46.371473  118459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 14:43:46.371484  118459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1008 14:43:46.371501  118459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 14:43:46.371511  118459 command_runner.go:130] > # creation as a file is not desired either.
	I1008 14:43:46.371526  118459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 14:43:46.371537  118459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 14:43:46.371545  118459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 14:43:46.371552  118459 command_runner.go:130] > # ]
	I1008 14:43:46.371559  118459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 14:43:46.371568  118459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 14:43:46.371574  118459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 14:43:46.371579  118459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 14:43:46.371584  118459 command_runner.go:130] > #
	I1008 14:43:46.371589  118459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 14:43:46.371595  118459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 14:43:46.371599  118459 command_runner.go:130] > # runtime_type = "oci"
	I1008 14:43:46.371606  118459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 14:43:46.371610  118459 command_runner.go:130] > # inherit_default_runtime = false
	I1008 14:43:46.371621  118459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 14:43:46.371628  118459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 14:43:46.371633  118459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 14:43:46.371639  118459 command_runner.go:130] > # monitor_env = []
	I1008 14:43:46.371643  118459 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 14:43:46.371649  118459 command_runner.go:130] > # allowed_annotations = []
	I1008 14:43:46.371654  118459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 14:43:46.371660  118459 command_runner.go:130] > # no_sync_log = false
	I1008 14:43:46.371664  118459 command_runner.go:130] > # default_annotations = {}
	I1008 14:43:46.371672  118459 command_runner.go:130] > # stream_websockets = false
	I1008 14:43:46.371676  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.371698  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.371705  118459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 14:43:46.371711  118459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 14:43:46.371719  118459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 14:43:46.371727  118459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 14:43:46.371731  118459 command_runner.go:130] > #   in $PATH.
	I1008 14:43:46.371736  118459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 14:43:46.371743  118459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 14:43:46.371748  118459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 14:43:46.371753  118459 command_runner.go:130] > #   state.
	I1008 14:43:46.371759  118459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 14:43:46.371767  118459 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1008 14:43:46.371772  118459 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1008 14:43:46.371780  118459 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1008 14:43:46.371785  118459 command_runner.go:130] > #   the values from the default runtime on load time.
	I1008 14:43:46.371793  118459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 14:43:46.371801  118459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 14:43:46.371819  118459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 14:43:46.371827  118459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 14:43:46.371832  118459 command_runner.go:130] > #   The currently recognized values are:
	I1008 14:43:46.371840  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 14:43:46.371846  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 14:43:46.371854  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 14:43:46.371859  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 14:43:46.371869  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 14:43:46.371877  118459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 14:43:46.371885  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 14:43:46.371894  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 14:43:46.371900  118459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 14:43:46.371908  118459 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1008 14:43:46.371917  118459 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1008 14:43:46.371926  118459 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1008 14:43:46.371937  118459 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1008 14:43:46.371943  118459 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1008 14:43:46.371951  118459 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1008 14:43:46.371958  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1008 14:43:46.371966  118459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 14:43:46.371973  118459 command_runner.go:130] > #   deprecated option "conmon".
	I1008 14:43:46.371980  118459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 14:43:46.371987  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 14:43:46.371993  118459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 14:43:46.372000  118459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 14:43:46.372006  118459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 14:43:46.372013  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 14:43:46.372019  118459 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1008 14:43:46.372025  118459 command_runner.go:130] > #   conmon-rs by using:
	I1008 14:43:46.372032  118459 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1008 14:43:46.372041  118459 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1008 14:43:46.372050  118459 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1008 14:43:46.372060  118459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 14:43:46.372067  118459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 14:43:46.372073  118459 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1008 14:43:46.372083  118459 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1008 14:43:46.372090  118459 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1008 14:43:46.372097  118459 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1008 14:43:46.372107  118459 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1008 14:43:46.372116  118459 command_runner.go:130] > #   when a machine crash happens.
	I1008 14:43:46.372125  118459 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1008 14:43:46.372132  118459 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1008 14:43:46.372139  118459 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1008 14:43:46.372145  118459 command_runner.go:130] > #   seccomp profile for the runtime.
	I1008 14:43:46.372151  118459 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1008 14:43:46.372160  118459 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1008 14:43:46.372165  118459 command_runner.go:130] > #
	I1008 14:43:46.372170  118459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 14:43:46.372175  118459 command_runner.go:130] > #
	I1008 14:43:46.372181  118459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 14:43:46.372187  118459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 14:43:46.372192  118459 command_runner.go:130] > #
	I1008 14:43:46.372198  118459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 14:43:46.372205  118459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 14:43:46.372208  118459 command_runner.go:130] > #
	I1008 14:43:46.372214  118459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 14:43:46.372219  118459 command_runner.go:130] > # feature.
	I1008 14:43:46.372222  118459 command_runner.go:130] > #
	I1008 14:43:46.372228  118459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 14:43:46.372235  118459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 14:43:46.372242  118459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 14:43:46.372251  118459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 14:43:46.372259  118459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 14:43:46.372261  118459 command_runner.go:130] > #
	I1008 14:43:46.372267  118459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 14:43:46.372275  118459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 14:43:46.372281  118459 command_runner.go:130] > #
	I1008 14:43:46.372286  118459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 14:43:46.372294  118459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 14:43:46.372297  118459 command_runner.go:130] > #
	I1008 14:43:46.372302  118459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 14:43:46.372310  118459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 14:43:46.372314  118459 command_runner.go:130] > # limitation.
	I1008 14:43:46.372320  118459 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1008 14:43:46.372325  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1008 14:43:46.372330  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372334  118459 command_runner.go:130] > runtime_root = "/run/crun"
	I1008 14:43:46.372343  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372349  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372353  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372358  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372363  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372367  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372374  118459 command_runner.go:130] > allowed_annotations = [
	I1008 14:43:46.372380  118459 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1008 14:43:46.372384  118459 command_runner.go:130] > ]
	I1008 14:43:46.372391  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372395  118459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 14:43:46.372402  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1008 14:43:46.372406  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372411  118459 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 14:43:46.372415  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372422  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372425  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372432  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372436  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372453  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372461  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372473  118459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 14:43:46.372482  118459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 14:43:46.372491  118459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 14:43:46.372498  118459 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1008 14:43:46.372509  118459 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1008 14:43:46.372520  118459 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1008 14:43:46.372530  118459 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1008 14:43:46.372537  118459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 14:43:46.372545  118459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 14:43:46.372555  118459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 14:43:46.372562  118459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 14:43:46.372569  118459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 14:43:46.372574  118459 command_runner.go:130] > # Example:
	I1008 14:43:46.372578  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 14:43:46.372585  118459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 14:43:46.372591  118459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 14:43:46.372602  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 14:43:46.372608  118459 command_runner.go:130] > # cpuset = "0-1"
	I1008 14:43:46.372612  118459 command_runner.go:130] > # cpushares = "5"
	I1008 14:43:46.372617  118459 command_runner.go:130] > # cpuquota = "1000"
	I1008 14:43:46.372621  118459 command_runner.go:130] > # cpuperiod = "100000"
	I1008 14:43:46.372626  118459 command_runner.go:130] > # cpulimit = "35"
	I1008 14:43:46.372630  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.372634  118459 command_runner.go:130] > # The workload name is workload-type.
	I1008 14:43:46.372643  118459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 14:43:46.372650  118459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 14:43:46.372655  118459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 14:43:46.372665  118459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 14:43:46.372682  118459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
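	Following the annotation format shown in the example just above, a Pod opting into the "workload-type" workload might look like this sketch (pod and container names are placeholders and the cpushares value is arbitrary):
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                             # activation annotation; only the key matters
	    # Per-container override for the container named "app".
	    io.crio.workload-type/app: '{"cpushares": "10"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1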
	I1008 14:43:46.372689  118459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 14:43:46.372695  118459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 14:43:46.372701  118459 command_runner.go:130] > # Default value is set to true
	I1008 14:43:46.372706  118459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 14:43:46.372713  118459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 14:43:46.372717  118459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 14:43:46.372724  118459 command_runner.go:130] > # Default value is set to 'false'
	I1008 14:43:46.372728  118459 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 14:43:46.372735  118459 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1008 14:43:46.372743  118459 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1008 14:43:46.372748  118459 command_runner.go:130] > # timezone = ""
	I1008 14:43:46.372756  118459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 14:43:46.372761  118459 command_runner.go:130] > #
	I1008 14:43:46.372767  118459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 14:43:46.372775  118459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1008 14:43:46.372781  118459 command_runner.go:130] > [crio.image]
	I1008 14:43:46.372786  118459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 14:43:46.372792  118459 command_runner.go:130] > # default_transport = "docker://"
	I1008 14:43:46.372798  118459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 14:43:46.372822  118459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372828  118459 command_runner.go:130] > # global_auth_file = ""
	I1008 14:43:46.372833  118459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 14:43:46.372840  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372844  118459 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.372853  118459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 14:43:46.372861  118459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372871  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372877  118459 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 14:43:46.372883  118459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 14:43:46.372888  118459 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1008 14:43:46.372896  118459 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1008 14:43:46.372902  118459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 14:43:46.372908  118459 command_runner.go:130] > # pause_command = "/pause"
	I1008 14:43:46.372914  118459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 14:43:46.372922  118459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 14:43:46.372927  118459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 14:43:46.372935  118459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 14:43:46.372940  118459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 14:43:46.372948  118459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 14:43:46.372952  118459 command_runner.go:130] > # pinned_images = [
	I1008 14:43:46.372958  118459 command_runner.go:130] > # ]
	I1008 14:43:46.372963  118459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 14:43:46.372972  118459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 14:43:46.372978  118459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 14:43:46.372986  118459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 14:43:46.372991  118459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 14:43:46.372997  118459 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1008 14:43:46.373003  118459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 14:43:46.373012  118459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 14:43:46.373021  118459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 14:43:46.373029  118459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 14:43:46.373034  118459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 14:43:46.373042  118459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1008 14:43:46.373051  118459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 14:43:46.373058  118459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 14:43:46.373065  118459 command_runner.go:130] > # changing them here.
	I1008 14:43:46.373070  118459 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1008 14:43:46.373076  118459 command_runner.go:130] > # insecure_registries = [
	I1008 14:43:46.373079  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373087  118459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 14:43:46.373095  118459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 14:43:46.373104  118459 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 14:43:46.373112  118459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 14:43:46.373116  118459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 14:43:46.373124  118459 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1008 14:43:46.373130  118459 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1008 14:43:46.373134  118459 command_runner.go:130] > # auto_reload_registries = false
	I1008 14:43:46.373142  118459 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1008 14:43:46.373149  118459 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1008 14:43:46.373157  118459 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1008 14:43:46.373162  118459 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1008 14:43:46.373168  118459 command_runner.go:130] > # The mode of short name resolution.
	I1008 14:43:46.373174  118459 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1008 14:43:46.373183  118459 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1008 14:43:46.373190  118459 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1008 14:43:46.373195  118459 command_runner.go:130] > # short_name_mode = "enforcing"
	I1008 14:43:46.373204  118459 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1008 14:43:46.373212  118459 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1008 14:43:46.373216  118459 command_runner.go:130] > # oci_artifact_mount_support = true
	I1008 14:43:46.373224  118459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 14:43:46.373228  118459 command_runner.go:130] > # CNI plugins.
	I1008 14:43:46.373234  118459 command_runner.go:130] > [crio.network]
	I1008 14:43:46.373239  118459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 14:43:46.373246  118459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 14:43:46.373251  118459 command_runner.go:130] > # cni_default_network = ""
	I1008 14:43:46.373259  118459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 14:43:46.373266  118459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 14:43:46.373271  118459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 14:43:46.373277  118459 command_runner.go:130] > # plugin_dirs = [
	I1008 14:43:46.373280  118459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 14:43:46.373284  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373289  118459 command_runner.go:130] > # List of included pod metrics.
	I1008 14:43:46.373295  118459 command_runner.go:130] > # included_pod_metrics = [
	I1008 14:43:46.373297  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373304  118459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 14:43:46.373310  118459 command_runner.go:130] > [crio.metrics]
	I1008 14:43:46.373314  118459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 14:43:46.373320  118459 command_runner.go:130] > # enable_metrics = false
	I1008 14:43:46.373324  118459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 14:43:46.373331  118459 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 14:43:46.373337  118459 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 14:43:46.373347  118459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 14:43:46.373355  118459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 14:43:46.373359  118459 command_runner.go:130] > # metrics_collectors = [
	I1008 14:43:46.373364  118459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 14:43:46.373368  118459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 14:43:46.373371  118459 command_runner.go:130] > # 	"containers_oom_total",
	I1008 14:43:46.373374  118459 command_runner.go:130] > # 	"processes_defunct",
	I1008 14:43:46.373378  118459 command_runner.go:130] > # 	"operations_total",
	I1008 14:43:46.373381  118459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 14:43:46.373386  118459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 14:43:46.373389  118459 command_runner.go:130] > # 	"operations_errors_total",
	I1008 14:43:46.373393  118459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 14:43:46.373397  118459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 14:43:46.373400  118459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 14:43:46.373408  118459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 14:43:46.373411  118459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 14:43:46.373415  118459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 14:43:46.373422  118459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 14:43:46.373426  118459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 14:43:46.373430  118459 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1008 14:43:46.373436  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373450  118459 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1008 14:43:46.373460  118459 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1008 14:43:46.373468  118459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 14:43:46.373475  118459 command_runner.go:130] > # metrics_port = 9090
	I1008 14:43:46.373480  118459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 14:43:46.373486  118459 command_runner.go:130] > # metrics_socket = ""
	I1008 14:43:46.373490  118459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 14:43:46.373499  118459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 14:43:46.373508  118459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 14:43:46.373514  118459 command_runner.go:130] > # certificate on any modification event.
	I1008 14:43:46.373518  118459 command_runner.go:130] > # metrics_cert = ""
	I1008 14:43:46.373525  118459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 14:43:46.373530  118459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 14:43:46.373536  118459 command_runner.go:130] > # metrics_key = ""
	I1008 14:43:46.373542  118459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 14:43:46.373548  118459 command_runner.go:130] > [crio.tracing]
	I1008 14:43:46.373554  118459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 14:43:46.373564  118459 command_runner.go:130] > # enable_tracing = false
	I1008 14:43:46.373571  118459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 14:43:46.373576  118459 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1008 14:43:46.373584  118459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 14:43:46.373591  118459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 14:43:46.373598  118459 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 14:43:46.373604  118459 command_runner.go:130] > [crio.nri]
	I1008 14:43:46.373608  118459 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 14:43:46.373614  118459 command_runner.go:130] > # enable_nri = true
	I1008 14:43:46.373618  118459 command_runner.go:130] > # NRI socket to listen on.
	I1008 14:43:46.373624  118459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 14:43:46.373628  118459 command_runner.go:130] > # NRI plugin directory to use.
	I1008 14:43:46.373635  118459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 14:43:46.373640  118459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 14:43:46.373647  118459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 14:43:46.373653  118459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 14:43:46.373688  118459 command_runner.go:130] > # nri_disable_connections = false
	I1008 14:43:46.373696  118459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 14:43:46.373701  118459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 14:43:46.373705  118459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 14:43:46.373712  118459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 14:43:46.373717  118459 command_runner.go:130] > # NRI default validator configuration.
	I1008 14:43:46.373725  118459 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1008 14:43:46.373733  118459 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1008 14:43:46.373737  118459 command_runner.go:130] > # can be restricted/rejected:
	I1008 14:43:46.373743  118459 command_runner.go:130] > # - OCI hook injection
	I1008 14:43:46.373748  118459 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1008 14:43:46.373755  118459 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1008 14:43:46.373760  118459 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1008 14:43:46.373766  118459 command_runner.go:130] > # - adjustment of linux namespaces
	I1008 14:43:46.373772  118459 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1008 14:43:46.373780  118459 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1008 14:43:46.373788  118459 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1008 14:43:46.373791  118459 command_runner.go:130] > #
	I1008 14:43:46.373795  118459 command_runner.go:130] > # [crio.nri.default_validator]
	I1008 14:43:46.373802  118459 command_runner.go:130] > # nri_enable_default_validator = false
	I1008 14:43:46.373811  118459 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1008 14:43:46.373819  118459 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1008 14:43:46.373827  118459 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1008 14:43:46.373832  118459 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1008 14:43:46.373839  118459 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1008 14:43:46.373843  118459 command_runner.go:130] > # nri_validator_required_plugins = [
	I1008 14:43:46.373848  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373853  118459 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1008 14:43:46.373861  118459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 14:43:46.373865  118459 command_runner.go:130] > [crio.stats]
	I1008 14:43:46.373873  118459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 14:43:46.373880  118459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 14:43:46.373887  118459 command_runner.go:130] > # stats_collection_period = 0
	I1008 14:43:46.373892  118459 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1008 14:43:46.373900  118459 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1008 14:43:46.373907  118459 command_runner.go:130] > # collection_period = 0
	I1008 14:43:46.373928  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353034685Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1008 14:43:46.373938  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353062648Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1008 14:43:46.373948  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.35308236Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1008 14:43:46.373956  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353100078Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1008 14:43:46.373967  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353161884Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:46.373976  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353351718Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1008 14:43:46.373988  118459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 14:43:46.374064  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:46.374077  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:46.374093  118459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:43:46.374116  118459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:43:46.374237  118459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:43:46.374300  118459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:43:46.382363  118459 command_runner.go:130] > kubeadm
	I1008 14:43:46.382384  118459 command_runner.go:130] > kubectl
	I1008 14:43:46.382389  118459 command_runner.go:130] > kubelet
	I1008 14:43:46.382411  118459 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:43:46.382482  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:43:46.390162  118459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:43:46.403097  118459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:43:46.415613  118459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1008 14:43:46.428192  118459 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:43:46.432007  118459 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1008 14:43:46.432080  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.522533  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:46.535801  118459 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:43:46.535827  118459 certs.go:195] generating shared ca certs ...
	I1008 14:43:46.535849  118459 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:46.536002  118459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:43:46.536048  118459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:43:46.536069  118459 certs.go:257] generating profile certs ...
	I1008 14:43:46.536190  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:43:46.536242  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:43:46.536277  118459 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:43:46.536291  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:43:46.536306  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:43:46.536318  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:43:46.536330  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:43:46.536342  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:43:46.536377  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:43:46.536393  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:43:46.536405  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:43:46.536476  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:43:46.536513  118459 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:43:46.536523  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:43:46.536550  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:43:46.536574  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:43:46.536595  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:43:46.536635  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:46.536660  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.536675  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.536688  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.537241  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:43:46.555642  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:43:46.572819  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:43:46.590661  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:43:46.607931  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:43:46.625383  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:43:46.642336  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:43:46.659419  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:43:46.676486  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:43:46.693083  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:43:46.710326  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:43:46.727941  118459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:43:46.740780  118459 ssh_runner.go:195] Run: openssl version
	I1008 14:43:46.747268  118459 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1008 14:43:46.747351  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:43:46.756220  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760077  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760121  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760189  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.794493  118459 command_runner.go:130] > 3ec20f2e
	I1008 14:43:46.794726  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:43:46.803126  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:43:46.811855  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815648  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815718  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815789  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.849403  118459 command_runner.go:130] > b5213941
	I1008 14:43:46.849676  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:43:46.857958  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:43:46.866212  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869736  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869766  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869798  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.904128  118459 command_runner.go:130] > 51391683
	I1008 14:43:46.904402  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:43:46.913326  118459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917356  118459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917385  118459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 14:43:46.917396  118459 command_runner.go:130] > Device: 8,1	Inode: 591874      Links: 1
	I1008 14:43:46.917405  118459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.917413  118459 command_runner.go:130] > Access: 2025-10-08 14:39:39.676864991 +0000
	I1008 14:43:46.917418  118459 command_runner.go:130] > Modify: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917426  118459 command_runner.go:130] > Change: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917431  118459 command_runner.go:130] >  Birth: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917505  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:43:46.951955  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.952157  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:43:46.986574  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.986789  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:43:47.021180  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.021253  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:43:47.054995  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.055238  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:43:47.088666  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.089049  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:43:47.123893  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.124156  118459 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:47.124254  118459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:43:47.124313  118459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:43:47.152244  118459 cri.go:89] found id: ""
	I1008 14:43:47.152307  118459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:43:47.160274  118459 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1008 14:43:47.160294  118459 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1008 14:43:47.160299  118459 command_runner.go:130] > /var/lib/minikube/etcd:
	I1008 14:43:47.160318  118459 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:43:47.160325  118459 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:43:47.160370  118459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:43:47.167663  118459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:43:47.167758  118459 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.167803  118459 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "functional-367186" cluster setting kubeconfig missing "functional-367186" context setting]
	I1008 14:43:47.168217  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.169051  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.169269  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.170001  118459 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:43:47.170034  118459 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:43:47.170046  118459 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:43:47.170052  118459 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:43:47.170058  118459 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:43:47.170055  118459 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:43:47.170535  118459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:43:47.177804  118459 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 14:43:47.177829  118459 kubeadm.go:601] duration metric: took 17.498385ms to restartPrimaryControlPlane
	I1008 14:43:47.177836  118459 kubeadm.go:402] duration metric: took 53.689897ms to StartCluster
	I1008 14:43:47.177851  118459 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.177960  118459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.178692  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.178964  118459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:43:47.179000  118459 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:43:47.179182  118459 addons.go:69] Setting storage-provisioner=true in profile "functional-367186"
	I1008 14:43:47.179161  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:47.179199  118459 addons.go:238] Setting addon storage-provisioner=true in "functional-367186"
	I1008 14:43:47.179280  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.179202  118459 addons.go:69] Setting default-storageclass=true in profile "functional-367186"
	I1008 14:43:47.179355  118459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-367186"
	I1008 14:43:47.179643  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.179723  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.181696  118459 out.go:179] * Verifying Kubernetes components...
	I1008 14:43:47.182986  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:47.197887  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.198131  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.198516  118459 addons.go:238] Setting addon default-storageclass=true in "functional-367186"
	I1008 14:43:47.198560  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.198956  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.199610  118459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:43:47.201208  118459 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.201228  118459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:43:47.201280  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.224257  118459 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.224285  118459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:43:47.224346  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.226258  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.244099  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.285014  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:47.298345  118459 node_ready.go:35] waiting up to 6m0s for node "functional-367186" to be "Ready" ...
	I1008 14:43:47.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.298934  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:47.336898  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.352323  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.393808  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.393854  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.393886  118459 retry.go:31] will retry after 231.755958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407397  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.407475  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407496  118459 retry.go:31] will retry after 329.539024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.626786  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.679746  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.679800  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.679850  118459 retry.go:31] will retry after 393.16896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.738034  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.790656  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.792936  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.792970  118459 retry.go:31] will retry after 318.025551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.799129  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.799197  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.073934  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.111484  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.127850  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.127921  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.127943  118459 retry.go:31] will retry after 836.309595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.162277  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.164855  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.164886  118459 retry.go:31] will retry after 780.910281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.299211  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.299650  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.799557  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.799964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.946262  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.964996  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.998239  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.000519  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.000554  118459 retry.go:31] will retry after 896.283262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.018974  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.019036  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.019061  118459 retry.go:31] will retry after 1.078166751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.299460  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.299536  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.299868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:49.299950  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:49.799616  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.799720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.800392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:49.897595  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:49.950387  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.950427  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.950463  118459 retry.go:31] will retry after 1.484279714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.097767  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:50.149377  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:50.149421  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.149465  118459 retry.go:31] will retry after 1.600335715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.298625  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:50.798695  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.798808  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.799174  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.298904  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.435639  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:51.489347  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.491876  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.491909  118459 retry.go:31] will retry after 2.200481753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.750291  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:51.799001  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.799398  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:51.799489  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:51.803486  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.803590  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.803616  118459 retry.go:31] will retry after 2.262800355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:52.299098  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.299177  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.299542  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:52.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.799399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.799764  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.298621  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.299048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.692777  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:53.745144  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:53.745204  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.745229  118459 retry.go:31] will retry after 3.527117876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.799392  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.799480  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.799857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:53.799918  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:54.067271  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:54.118417  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:54.118478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.118503  118459 retry.go:31] will retry after 3.862999365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.298755  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.298838  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.299219  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:54.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.799074  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.298863  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.298942  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.299253  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.798989  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.799066  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.799421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:56.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:56.299793  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:56.799548  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.799947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.272978  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:57.298541  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.298620  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.298918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.321958  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:57.324558  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.324587  118459 retry.go:31] will retry after 4.383767223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.799184  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.799301  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.799689  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.982062  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:58.032702  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:58.035195  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.035237  118459 retry.go:31] will retry after 5.903970239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:58.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:58.799473  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:59.298999  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.299078  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.299479  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:59.799062  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.799145  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.299550  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.799200  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.799275  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.799625  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:00.799685  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:01.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.299385  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.299774  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:01.709356  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:01.759088  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:01.761882  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.761921  118459 retry.go:31] will retry after 6.257319935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.799124  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.799237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.299268  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.299716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.799390  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.799502  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.799880  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:02.799960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:03.299492  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.299563  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.299925  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.798665  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.798754  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.940379  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:03.990275  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:03.993084  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:03.993122  118459 retry.go:31] will retry after 4.028920288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:04.298653  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.299341  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:04.798956  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.799033  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:05.299051  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.299176  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.299598  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:05.299657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:05.799285  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.799356  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.799725  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.299393  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.299841  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.799593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.799944  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.299053  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.798714  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.798786  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.799261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:07.799325  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:08.019559  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:08.023109  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:08.072023  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.074947  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074963  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074982  118459 retry.go:31] will retry after 6.922745297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.076401  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.076428  118459 retry.go:31] will retry after 5.441570095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.298802  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.299153  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:08.799104  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.799539  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.299229  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.299310  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.299686  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.799379  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.799472  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.799807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:09.799869  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
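The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-367186 requests are the node-readiness poll: roughly every 500 ms the node object is fetched and its Ready condition checked, with a warning logged whenever the connection is refused. A minimal client-go sketch of such a loop, assuming illustrative names and intervals rather than minikube's actual node_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node until its Ready condition is True,
	// like the GET /api/v1/nodes/<name> loop in the log above.
	func waitForNodeReady(cs *kubernetes.Clientset, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// Corresponds to the "error getting node ... (will retry)" warnings.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("node %q not Ready after %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = waitForNodeReady(cs, "functional-367186", 500*time.Millisecond, 4*time.Minute)
	}

"connection refused" here simply means nothing is listening on 192.168.49.2:8441 yet, so the loop keeps retrying until the apiserver is serving again.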
	I1008 14:44:10.299531  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.299603  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.299958  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:10.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.799011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.298647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.299123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.798895  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.799225  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:12.298842  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.298915  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:12.299310  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:12.798893  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.299008  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.518328  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:13.572977  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:13.573020  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.573038  118459 retry.go:31] will retry after 15.052611026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.798632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.798973  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.298894  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.299223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.798866  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.798962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:14.799351  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:14.998673  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:15.051035  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:15.051092  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.051116  118459 retry.go:31] will retry after 7.550335313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.299491  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.299568  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
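The Accept header in these requests (protobuf first, JSON as fallback) is standard client-go content negotiation. If reproducing the same behaviour, it could be set on the rest.Config roughly like this; the kubeconfig path is taken from the log and the snippet is purely illustrative:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Prefer protobuf, fall back to JSON, matching the Accept header above.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		fmt.Printf("Accept: %s\n", cfg.AcceptContentTypes)
	}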
	I1008 14:44:15.799546  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.799646  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.800035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.298586  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.299006  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:17.298969  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.299043  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:17.299467  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:17.798964  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.299415  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.799349  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.799698  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:19.299431  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.299558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.299972  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:19.300047  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:19.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.299042  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.798691  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.798998  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.298572  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.298698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.299121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:21.799149  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:22.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:22.602557  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:22.653552  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:22.656108  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.656138  118459 retry.go:31] will retry after 31.201355729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.799459  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.799558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.799901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.299026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.798988  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.799061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:23.799539  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:24.299048  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.299131  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.299558  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:24.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.799285  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.799622  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.299437  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.299594  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.299994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.799056  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:26.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.298737  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.299066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:26.299138  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:26.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.799032  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.298934  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.299032  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.798977  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:28.298998  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.299130  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.299524  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:28.299599  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:28.625918  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:28.675593  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:28.678080  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.678122  118459 retry.go:31] will retry after 23.952219527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.799477  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.799570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.799970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.298589  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.298685  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.798713  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.798787  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.799221  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.298792  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.299229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.798891  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.799335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:30.799398  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:31.298936  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.299373  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:31.798930  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.799039  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.299072  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.799097  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.799529  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:32.799596  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:33.299230  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.299325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.299740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:33.798515  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.798587  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.798936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.299656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.798590  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.798664  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.799020  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:35.298588  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.298666  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.299052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:35.299143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:35.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.299007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.798626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:37.298948  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.299051  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:37.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:37.799006  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.799086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.799417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.299020  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.299100  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.299469  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.799369  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.799927  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:39.299580  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.299693  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.300082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:39.300150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:39.798611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.799046  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.298592  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.298670  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.798637  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.299138  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.798729  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.798815  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.799152  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:41.799215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:42.298723  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.298799  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.299170  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:42.798731  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.798836  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.799203  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.298908  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.299278  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.799167  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.799250  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:43.799661  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:44.299314  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.299416  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.299827  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:44.799577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.799657  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.800048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.298599  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.299047  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:46.298671  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.299126  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:46.299191  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:46.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.798850  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.799223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.299119  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.299231  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.299611  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.799336  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.799765  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:48.299501  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.299582  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.299947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:48.300006  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:48.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.798729  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.298752  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.798901  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.798982  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.298921  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.299003  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.798955  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.799416  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:50.799534  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:51.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.299214  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.299601  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:51.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.799388  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.799753  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.299413  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.299503  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.299839  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.631482  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:52.682310  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:52.684872  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.684901  118459 retry.go:31] will retry after 32.790446037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.799279  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.799368  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.799719  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:52.799778  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:53.299429  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.299873  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.799081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.858347  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:53.912029  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:53.912083  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:53.912107  118459 retry.go:31] will retry after 18.370397631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:54.298601  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:54.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.799095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:55.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.299226  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:55.299302  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:55.798903  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.798996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.298927  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.299347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:57.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.299509  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:57.299581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:57.799169  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.799283  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.299318  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.299391  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.299772  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.799563  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.799658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.800017  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.298677  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.299050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.798757  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:59.799217  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:00.298721  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.298821  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:00.798884  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.799337  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.298871  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.298949  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.299314  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.798878  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.799285  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:01.799345  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:02.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.299353  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:02.798928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.799012  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.799359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.298939  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.299014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.799249  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:03.799744  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:04.299367  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.299468  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.299800  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:04.799513  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.799614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.798722  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.799201  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:06.298786  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.298890  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.299232  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:06.299292  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:06.798807  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.798900  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.799230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.299263  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.299613  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.799343  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.799420  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.799763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:08.299428  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.299527  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.299872  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:08.299937  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:08.798593  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.798667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.799001  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.298582  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.798617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.798698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.298622  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.799101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:10.799164  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:11.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:11.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.282739  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:45:12.299378  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.299488  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.299877  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.333950  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336622  118459 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
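The validation failure is a consequence of the same outage: kubectl needs the apiserver's OpenAPI document to validate the manifest, and localhost:8441 is refusing connections, so enabling default-storageclass cannot succeed until the apiserver is back. Below is a sketch of re-running the exact command from the log on the minikube node (for example via minikube ssh) once the apiserver is reachable again; the paths are copied verbatim from the log, and this is only a manual illustration, not the recovery minikube itself performs:

// reapply_sketch.go: re-run the apply command captured in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed:", err)
	}
}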
	I1008 14:45:12.799135  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.799209  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:12.799657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:13.299289  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.299709  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:13.798861  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.798943  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.298849  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.298932  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.299258  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.799040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:15.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.299098  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:15.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:15.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.799155  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.799530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.299229  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.299576  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.799320  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.799402  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.799740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.298566  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:17.799082  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:18.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.298700  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:18.798851  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.798935  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.298852  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.299298  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.798906  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.798988  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.799347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:19.799406  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:20.298933  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.299355  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:20.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.799025  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.799390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.298968  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.299041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.799011  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.799369  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:22.299008  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.299101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.299519  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:22.299580  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:22.799213  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.799289  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.299390  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.299767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.799544  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.799617  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.799951  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.298561  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.298641  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.798607  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.799048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:24.799112  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:25.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:25.476423  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:45:25.531081  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531142  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531259  118459 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
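storage-provisioner fails for the same reason as default-storageclass above. Before re-applying either manifest it would make sense to wait for the apiserver's /readyz endpoint to answer; the sketch below is not part of minikube, assumes the default RBAC rule that allows anonymous access to /readyz, and skips TLS verification only because minikube's apiserver certificate is self-signed:

// readyz_sketch.go: poll /readyz until the apiserver reports ready.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reports ready")
				return
			}
			fmt.Println("apiserver reachable but not ready:", resp.Status)
		} else {
			fmt.Println("apiserver not reachable yet:", err)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for /readyz")
}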
	I1008 14:45:25.534376  118459 out.go:179] * Enabled addons: 
	I1008 14:45:25.535655  118459 addons.go:514] duration metric: took 1m38.356657385s for enable addons: enabled=[]
	I1008 14:45:25.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.798640  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.798959  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.298537  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.299011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.798610  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.798686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:26.799185  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:27.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.299111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:27.799210  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.799306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.799715  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.299395  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.299520  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.299905  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.798594  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:29.298630  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:29.299127  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:29.798717  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.798816  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.799196  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.299218  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.798893  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.799252  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:31.298834  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.299230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:31.299294  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:31.798829  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.798912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.799264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.298806  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.299262  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.799271  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:33.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.298966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.299345  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:33.299417  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:33.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.799654  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.299321  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.299423  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.299763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.799422  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.799533  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.799902  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.298559  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.298639  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.798592  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:35.799128  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:36.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.299156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:36.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.798779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.799148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.299530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:37.799713  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:38.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.299405  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.299766  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:38.799558  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.799667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.800040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.298689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.798644  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.799106  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:40.298658  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.299095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:40.299169  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:40.798657  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.799078  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.298629  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.798741  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.799102  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:42.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.299168  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:42.299237  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:42.798716  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.798788  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.298801  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.799130  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.799591  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:44.299252  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.299339  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.299712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:44.299773  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:44.799365  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.799825  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.299172  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.299287  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.299676  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.799167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.298781  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.298881  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.299294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.798856  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.798931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.799293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:46.799356  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:47.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.299246  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:47.799327  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.799406  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.299439  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.299542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.299919  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.798704  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:49.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:49.299162  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:49.798684  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.799141  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.298714  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.298795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.299144  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.798776  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.798853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.799207  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:51.298712  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.298791  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.299166  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:51.299231  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:51.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.798829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.799189  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.298885  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.299246  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.799319  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.298699  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.298776  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.299137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.799143  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.799505  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:53.799579  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:54.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.299276  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.299636  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:54.799331  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.799784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.299472  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.798585  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.798665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:56.298627  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:56.299148  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:56.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.799077  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.299523  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.799274  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.799642  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:58.299356  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.299473  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.299961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:58.300023  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:58.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.799059  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.298721  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.798755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.798766  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.798873  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.799228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:00.799293  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:01.298587  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.299023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:01.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.798731  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.799123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.298698  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.799202  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:03.298750  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.298833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:03.299244  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:03.799037  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.799122  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.799491  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.299167  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.299249  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.299630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.799414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.799795  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:05.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.299956  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:05.300019  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:05.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.298578  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.799117  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.299118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.299493  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.799139  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.799496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:07.799569  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:08.299035  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.299126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:08.799377  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.799812  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.298529  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.298607  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.298931  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.799111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:10.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.299130  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:10.299230  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:10.798708  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.798795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.298650  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.298984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.798571  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.798994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.299013  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.798609  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.799038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:12.799099  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:13.298602  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:13.798949  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.799028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.799365  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.299036  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.299417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.798995  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:14.799507  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:15.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:15.798739  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.299195  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.798747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.799211  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:17.299171  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.299252  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.299620  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:17.299687  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:17.799351  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.799429  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.799815  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.299581  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.299663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.300026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.798911  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.798995  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.799361  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.299017  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.798976  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.799059  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:19.799484  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:20.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.299063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.299433  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:20.799000  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.799073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.799422  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.299052  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.798986  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.799475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:21.799540  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:22.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.299073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.299421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:22.799016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.799089  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.299012  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.299086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.799352  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.799434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.799781  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:23.799842  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:24.299407  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.299843  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:24.799556  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.799961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.298635  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.298981  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.799082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:26.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.299076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:26.299150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:26.798664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.298937  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.299013  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.299343  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.798999  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:28.298903  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.298998  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.299342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:28.299409  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
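	Every cycle in this stretch ends the same way: the dial to 192.168.49.2:8441 is refused, which means nothing is listening on the apiserver port of functional-367186 yet (a timeout, by contrast, would point at routing or firewalling rather than a down apiserver). A small stand-alone probe of the kind sketched below can distinguish the two cases; the endpoint is taken from the log, and the probe is an illustrative sketch, not part of the test suite.

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    // Probes the apiserver endpoint seen in the log. "connection refused" returns
	    // almost immediately when the port is closed; an unreachable host would
	    // instead run into the 2-second dial timeout.
	    func main() {
	    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver endpoint not reachable:", err)
	    		return
	    	}
	    	defer conn.Close()
	    	fmt.Println("apiserver endpoint is accepting TCP connections")
	    }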
	I1008 14:46:28.799216  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.799293  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.299414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.299824  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.799545  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.298574  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.298654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.299010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.799063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:30.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:31.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.299084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:31.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.799089  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.298660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.798689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.798772  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.799169  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:32.799234  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:33.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:33.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.799101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.299040  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.299520  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.799151  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.799224  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.799552  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:34.799606  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:35.299196  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.299279  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:35.799293  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.799369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.799727  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.299400  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.299857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.799528  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.799601  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:36.799998  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:37.298659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.299094  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:37.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.798758  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.799112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.298715  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.298793  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.299167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.799005  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.799470  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:39.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.299482  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:39.299547  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:39.799057  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.799149  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.299162  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.299239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.299588  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.799254  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.799325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.799695  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:41.299348  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.299424  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.299798  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:41.299888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:41.799486  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.799571  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.799908  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.299014  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.798601  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.799021  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.298597  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.298675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.299015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.798718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:43.799158  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:44.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.299079  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:44.798646  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.298651  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.298724  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.798658  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:45.799190  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:46.298664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.298740  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.299081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:46.798660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.299010  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.299116  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.299468  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.799515  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:47.799577  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:48.299145  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.299237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.299586  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:48.799465  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.799540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.799893  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.299567  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.300081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.798774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.799156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:50.298747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.298852  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:50.299334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:50.798849  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.798940  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.799370  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.298974  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.299474  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.799088  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.799617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:52.299319  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.299399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.299750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:52.299815  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:52.799425  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.799532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.799968  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.298596  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.299057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.798951  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.799031  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.799358  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.298997  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.299141  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.299485  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.799052  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:54.799557  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:55.299016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.299471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:55.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.799427  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.299476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.799071  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:57.299385  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.299507  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.299911  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:57.299974  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:57.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.799954  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.298614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.298971  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.798638  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.798717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.298676  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.299184  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.798757  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.798865  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.799194  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:59.799261  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:00.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.299242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:00.798799  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.798882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.298869  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.298960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.299308  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.798868  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.798957  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:01.799395  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:02.298910  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.299004  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.299367  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:02.798967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.799471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.299109  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.799358  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.799437  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.799820  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:03.799888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:04.299467  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.299570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:04.798525  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.798605  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.798957  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.299064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:06.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.298755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.299139  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:06.299201  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:06.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.798775  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.799212  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.299173  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.299680  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.799348  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.799431  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.799818  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:08.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.299559  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.299887  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:08.299953  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:08.798622  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.298666  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.298743  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.299110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.798767  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.298823  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.299192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.799192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:10.799264  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:11.298772  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.298854  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.299193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:11.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.798887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.799274  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.298832  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.298912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.299277  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.798808  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.798896  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.799275  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:12.799334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:13.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.298906  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:13.799086  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.799171  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.799549  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.299233  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.299317  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.299685  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.799321  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.799395  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.799748  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:14.799845  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:15.299364  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.299434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.299756  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:15.799417  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.799861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.299614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.299915  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.798573  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.799007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:17.298827  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.299306  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:17.299381  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:17.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.798968  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.799302  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.298694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.799418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:19.299079  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.299153  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.299571  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:19.299630  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:19.799185  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.799262  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.799651  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.299313  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.299398  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.299801  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.800024  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:21.799168  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:22.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.298730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:22.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.798732  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.298704  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.298779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.299115  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.798943  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.799042  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:23.799509  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:24.298964  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.299040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.299390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:24.798583  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.798690  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.298624  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.299069  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.798756  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:26.298675  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:26.299192  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:26.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.799142  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.299005  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.299090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.299419  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.799045  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.799137  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.799544  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:28.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.299617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:28.299678  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:28.799473  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.799560  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.799899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.299985  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.798622  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.798983  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.298553  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.298632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.298995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.798697  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:30.799179  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:31.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.298695  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.299073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:31.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.298977  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.798588  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.798663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.799041  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:33.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:33.299097  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:33.798957  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.299095  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.299494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:35.299241  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:35.299795  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:35.799437  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.799530  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.799892  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.299548  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.798599  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.798674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.298967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.299050  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.299424  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.799403  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:37.799496  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:38.298988  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.299067  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.299408  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:38.799345  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.799481  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.799859  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.299510  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.299593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.299976  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:40.298711  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.298796  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:40.299245  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:40.798752  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.798837  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.799193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.298853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.299237  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.798946  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.799303  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:42.298889  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.298962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.299322  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:42.299384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:42.798944  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.298977  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.299047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.299368  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.799221  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.799302  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.799663  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:44.299294  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.299790  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:44.299872  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:44.799433  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.799542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.799888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.299563  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.299636  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.299993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:46.299512  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.299633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.300025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:46.300089  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:46.798790  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.798884  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.799229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.299087  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.299184  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.299563  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.798932  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.799009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.799428  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.299029  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.299106  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.299501  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.799380  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.799486  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.799833  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:48.799903  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:49.299564  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.300007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:49.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.799052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:51.298640  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.299093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:51.299156  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:51.798681  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.798761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.799132  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.298710  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.298829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.798883  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.799265  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:53.298856  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.298931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:53.299362  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:53.799190  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.799266  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.299296  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.799472  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.799553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.799952  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.298584  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.298660  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.798627  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.798713  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:55.799173  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:56.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.298834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:56.798788  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.798866  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.799242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.299122  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.299496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.799239  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.799714  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:57.799774  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:58.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.299464  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.299809  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:58.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.798672  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.799025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.298591  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.298674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.798618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.798694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.799057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:00.298633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:00.299182  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:00.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.799076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.298687  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.298762  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.299124  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.798694  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.798782  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.799125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.298730  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.298807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.299143  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:02.799242  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:03.298766  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.299191  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:03.799090  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.799168  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.799556  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.798656  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:05.298725  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.298803  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.299148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:05.299215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:05.798756  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.798859  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.298856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.299228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.799046  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.799394  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:07.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.299273  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:07.299732  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:07.799538  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.799609  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.799950  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.299147  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.799521  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:09.299345  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.299428  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.299805  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:09.299871  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:09.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.298815  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.298898  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.799063  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.799142  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.799548  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:11.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.299512  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.299861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:11.299938  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:11.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.298858  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.298934  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.298773  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.298847  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.799118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.799495  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:13.799564  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:14.299338  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.299418  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.299784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:14.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.798633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.798966  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.299111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.798836  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:16.299034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.299119  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.299472  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:16.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:16.799263  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.799716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.299984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.799093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.298690  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.298768  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.299127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.798926  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.799002  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:18.799405  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:19.298954  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.299028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.299371  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:19.798980  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.299425  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.798994  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.799140  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.799508  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:20.799581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:21.299202  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.299281  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.299656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:21.799334  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.799412  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.799779  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.299478  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.299564  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.798566  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.798990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:23.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.298653  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:23.299069  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:23.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.799024  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.298958  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.299387  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.799037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:25.299272  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.299346  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:25.299785  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:25.799564  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.799644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.800010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.298851  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.299197  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.798945  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.799020  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:27.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.299762  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:27.299828  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:27.799408  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.799498  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.799868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.299505  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.299589  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.299938  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.798710  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.799066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.298603  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.299072  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.799067  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:29.799143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:30.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.298723  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:30.798639  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.798719  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.298623  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:32.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.299071  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:32.299152  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:32.798666  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.798747  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.799135  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.298695  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.798993  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.799069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:34.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.299476  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.299807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:34.299873  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:34.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.798675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.298918  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.299259  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.799014  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.299386  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.299754  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.798548  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.798627  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:36.799056  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:37.298853  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.298929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.299261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:37.798581  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.298605  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.799034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:38.799603  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:39.299424  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.299514  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.299862  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:39.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.799092  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.298907  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.298997  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.299335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.799204  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.799649  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:40.799728  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:41.299541  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.299632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.299970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:41.798741  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.798831  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.799187  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.298986  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.299069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.299473  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.799301  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.799376  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.799728  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:42.799794  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:43.298557  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.298631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.299030  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:43.798919  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.799001  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.799377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.299220  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.299306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.299666  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.799308  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.799379  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.799750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:45.299391  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.299504  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.299837  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:45.299906  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:45.799476  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.799562  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.799953  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.298535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.298610  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.298988  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.798683  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.799014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:47.799500  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:48.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.299084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.299436  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:48.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.799397  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.799757  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.299469  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.299546  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.798748  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.799121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:50.298729  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.298811  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.299173  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:50.299238  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:50.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.798856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.799248  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.298812  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.298897  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.798948  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:52.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.299070  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:52.299545  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:52.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.799504  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.299161  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.299264  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.299675  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.799435  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.799534  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.799875  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.298718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.299112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.798929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.799294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:54.799357  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:55.299157  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.299235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.299606  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:55.799386  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.799470  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.799852  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.299065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.798779  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.798868  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.799243  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:57.299138  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.299227  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.299600  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:57.299666  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:57.799470  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.799545  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.799918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.298679  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.298761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.299149  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.799015  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.799090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:59.299293  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.299392  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.299742  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:59.299808  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:59.798577  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.299326  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.799153  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:01.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.299553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.299898  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:01.299965  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:01.798701  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.298874  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.299315  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.799145  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.799228  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.799568  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.299513  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.798557  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.799073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:03.799140  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:04.298885  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.298976  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.299401  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:04.799261  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.799710  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.299549  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.299642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.300048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.798774  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.798849  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.799206  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:05.799268  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:06.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.299053  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:06.799240  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.799328  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.799681  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.299414  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.299532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.799044  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:08.298825  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:08.299350  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:08.799137  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.799221  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.799589  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.299540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.299921  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.799064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:10.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.298925  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.299313  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:10.299380  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:10.799149  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.799223  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.799572  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.299419  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.299531  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.299928  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.798698  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.798777  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.799140  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:12.298875  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.299357  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:12.299428  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:12.799215  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.799641  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.299434  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.299538  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.299901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.798658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.798993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.298718  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.298806  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.299190  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.798984  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.799423  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:14.799511  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:15.299254  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.299343  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:15.798574  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.798655  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.298700  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.298800  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.299145  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.799300  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:17.299095  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.299193  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.299535  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:17.299597  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:17.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.799337  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.299759  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.799524  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.799598  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:19.299552  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.299638  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:19.300058  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:19.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.299002  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.798789  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.298846  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.298952  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.299301  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.799159  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.799239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.799630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:21.799697  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:22.299522  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.299619  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.299991  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:22.798758  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.798834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.799181  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.299061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.299437  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.799357  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.799433  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.799786  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:23.799850  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:24.298547  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:24.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.798835  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.799161  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.298901  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.298996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.299334  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.799154  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.799236  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.799604  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:26.299399  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.299521  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.299888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:26.299960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:26.798629  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.799035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.298805  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.298901  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.299256  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.798972  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.799378  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.299186  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.799616  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.800091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:28.800170  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:29.298943  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.299021  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.299362  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:29.799176  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.799282  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.299485  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.299566  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.299899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.798586  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:31.298771  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.299157  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:31.299210  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:31.798882  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.798989  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.299195  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.299278  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.299631  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.799405  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.799515  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.799866  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.298635  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.798843  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.798922  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.799266  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:33.799342  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:34.299019  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.299432  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:34.799270  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.799358  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.799712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.299543  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.299995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.798712  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.798807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.799171  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:36.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.298739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:36.299199  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:36.798682  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.299039  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.299475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.799319  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.799403  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.298633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.298999  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.799060  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:38.799123  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:39.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.298919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:39.799162  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.799585  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.299409  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.299508  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.299869  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.799084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:40.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:41.298831  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.298921  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:41.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.299467  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.299819  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.798568  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.798643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.798984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:43.298738  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.298822  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:43.299318  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:43.799035  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.799483  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.299382  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.299773  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.798575  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.799012  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.298748  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.298824  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.299159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.798886  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.798960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.799321  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:45.799384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:46.299022  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.299330  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:46.798742  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.798830  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.799234  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:47.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:49:47.299208  118459 node_ready.go:38] duration metric: took 6m0.000826952s for node "functional-367186" to be "Ready" ...
	I1008 14:49:47.302039  118459 out.go:203] 
	W1008 14:49:47.303804  118459 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 14:49:47.303820  118459 out.go:285] * 
	W1008 14:49:47.305511  118459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:49:47.306606  118459 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.462892455Z" level=info msg="createCtr: removing container 8651f476039be7edc94ef50784c528612ba9c7504c2e7a8ee289820d1780bb48" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.462919806Z" level=info msg="createCtr: deleting container 8651f476039be7edc94ef50784c528612ba9c7504c2e7a8ee289820d1780bb48 from storage" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.465060835Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c58427f58fdd58b4fdb4fadaedd99fdb_0" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.436638949Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a4632e23-5922-462a-a3da-a900330698c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.437472378Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=47ae67a6-9e88-4255-8bae-b89ffdfc7dfe name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.438306148Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.438529725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.441687675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.442240801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.464500429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465884187Z" level=info msg="createCtr: deleting container ID 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f from idIndex" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465930829Z" level=info msg="createCtr: removing container 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465963795Z" level=info msg="createCtr: deleting container 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f from storage" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.468045769Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.436890997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a8969f44-0f4e-4c5c-955a-6ae3ad79f3a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.437871883Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4b99943c-c84c-4270-9a6b-a336ea2755ae name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.440800008Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.441097787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.444672553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.445085701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.460021036Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461676485Z" level=info msg="createCtr: deleting container ID d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f from idIndex" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461723396Z" level=info msg="createCtr: removing container d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461764456Z" level=info msg="createCtr: deleting container d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f from storage" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.464213396Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:49:48.943724    4355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:48.944401    4355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:48.945929    4355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:48.946367    4355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:48.947898    4355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 14:49:48 up  2:32,  0 user,  load average: 0.14, 0.06, 0.45
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 14:49:40 functional-367186 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c58427f58fdd58b4fdb4fadaedd99fdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:40 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:40 functional-367186 kubelet[1801]: E1008 14:49:40.465486    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c58427f58fdd58b4fdb4fadaedd99fdb"
	Oct 08 14:49:41 functional-367186 kubelet[1801]: E1008 14:49:41.066870    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-367186.186c8afed11699ef\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8afed11699ef  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:39:41.429266927 +0000 UTC m=+0.550355432,LastTimestamp:2025-10-08 14:39:41.43072231 +0000 UTC m=+0.551810801,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 14:49:41 functional-367186 kubelet[1801]: E1008 14:49:41.483256    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 14:49:42 functional-367186 kubelet[1801]: E1008 14:49:42.113576    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:49:42 functional-367186 kubelet[1801]: I1008 14:49:42.326193    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:49:42 functional-367186 kubelet[1801]: E1008 14:49:42.326601    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.436207    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468290    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:44 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:44 functional-367186 kubelet[1801]:  > podSandboxID="4f5c4547ba25f8047b1a01ec096a800bad6487d4d0d0fe8fd4a152424b0efbf9"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468378    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:44 functional-367186 kubelet[1801]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:44 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468407    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.436410    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464562    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:47 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:47 functional-367186 kubelet[1801]:  > podSandboxID="4a13bc9351a22b93554dcee46226666905c4e1638ab46a476341d1435096d9d8"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464667    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:47 functional-367186 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:47 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464699    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 14:49:48 functional-367186 kubelet[1801]: E1008 14:49:48.243246    1801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (300.485109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.18s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-367186 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-367186 get po -A: exit status 1 (51.531019ms)

                                                
                                                
** stderr ** 
	E1008 14:49:49.863604  122092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:49.863948  122092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:49.865527  122092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:49.865841  122092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:49.867219  122092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-367186 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1008 14:49:49.863604  122092 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1008 14:49:49.863948  122092 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1008 14:49:49.865527  122092 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1008 14:49:49.865841  122092 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1008 14:49:49.867219  122092 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-367186 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-367186 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (288.442226ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-840888                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-840888   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p download-docker-250844 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p download-docker-250844                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-250844 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ --download-only -p binary-mirror-198013 --alsologtostderr --binary-mirror http://127.0.0.1:41765 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p binary-mirror-198013                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-198013   │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ addons  │ enable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ addons  │ disable dashboard -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ start   │ -p addons-541206 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ -p addons-541206                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-541206          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │ 08 Oct 25 14:26 UTC │
	│ start   │ -p nospam-526605 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-526605 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:26 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-526605          │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-367186      │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-367186      │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:43:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:43:43.627861  118459 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:43:43.627954  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.627958  118459 out.go:374] Setting ErrFile to fd 2...
	I1008 14:43:43.627962  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.628171  118459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:43:43.628614  118459 out.go:368] Setting JSON to false
	I1008 14:43:43.629495  118459 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8775,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:43:43.629593  118459 start.go:141] virtualization: kvm guest
	I1008 14:43:43.631500  118459 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:43:43.632767  118459 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:43:43.632773  118459 notify.go:220] Checking for updates...
	I1008 14:43:43.634937  118459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:43:43.636218  118459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:43.640666  118459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:43:43.642185  118459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:43:43.643421  118459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:43:43.644930  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:43.645039  118459 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:43:43.667985  118459 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:43:43.668119  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.723136  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.713080092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.723287  118459 docker.go:318] overlay module found
	I1008 14:43:43.725936  118459 out.go:179] * Using the docker driver based on existing profile
	I1008 14:43:43.727069  118459 start.go:305] selected driver: docker
	I1008 14:43:43.727087  118459 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.727171  118459 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:43:43.727263  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.781426  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.772365606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.782086  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:43.782179  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:43.782243  118459 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.784039  118459 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:43:43.785148  118459 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:43:43.786245  118459 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:43:43.787146  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:43.787178  118459 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:43:43.787189  118459 cache.go:58] Caching tarball of preloaded images
	I1008 14:43:43.787237  118459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:43:43.787273  118459 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:43:43.787283  118459 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:43:43.787359  118459 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:43:43.806536  118459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:43:43.806562  118459 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:43:43.806584  118459 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:43:43.806623  118459 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:43:43.806704  118459 start.go:364] duration metric: took 49.444µs to acquireMachinesLock for "functional-367186"
	I1008 14:43:43.806736  118459 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:43:43.806747  118459 fix.go:54] fixHost starting: 
	I1008 14:43:43.806975  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:43.822750  118459 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:43:43.822776  118459 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:43:43.824577  118459 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:43:43.824603  118459 machine.go:93] provisionDockerMachine start ...
	I1008 14:43:43.824673  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:43.841160  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:43.841463  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:43.841483  118459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:43:43.985591  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:43.985624  118459 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:43:43.985682  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.003073  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.003294  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.003316  118459 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:43:44.156671  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:44.156765  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.173583  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.173820  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.173845  118459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:43:44.319171  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:43:44.319200  118459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:43:44.319238  118459 ubuntu.go:190] setting up certificates
	I1008 14:43:44.319253  118459 provision.go:84] configureAuth start
	I1008 14:43:44.319306  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:44.337134  118459 provision.go:143] copyHostCerts
	I1008 14:43:44.337168  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337204  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:43:44.337226  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337295  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:43:44.337373  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337398  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:43:44.337405  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337431  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:43:44.337503  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337524  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:43:44.337531  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337557  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:43:44.337611  118459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:43:44.449681  118459 provision.go:177] copyRemoteCerts
	I1008 14:43:44.449756  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:43:44.449792  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.466984  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:44.569881  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:43:44.569953  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:43:44.587517  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:43:44.587583  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:43:44.605065  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:43:44.605124  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:43:44.622323  118459 provision.go:87] duration metric: took 303.055536ms to configureAuth
	I1008 14:43:44.622354  118459 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:43:44.622537  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:44.622644  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.639387  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.639612  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.639636  118459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:43:44.900547  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:43:44.900571  118459 machine.go:96] duration metric: took 1.07595926s to provisionDockerMachine
	I1008 14:43:44.900586  118459 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:43:44.900600  118459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:43:44.900655  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:43:44.900706  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.917783  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.020925  118459 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:43:45.024356  118459 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1008 14:43:45.024381  118459 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1008 14:43:45.024389  118459 command_runner.go:130] > VERSION_ID="12"
	I1008 14:43:45.024395  118459 command_runner.go:130] > VERSION="12 (bookworm)"
	I1008 14:43:45.024402  118459 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1008 14:43:45.024406  118459 command_runner.go:130] > ID=debian
	I1008 14:43:45.024410  118459 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1008 14:43:45.024415  118459 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1008 14:43:45.024420  118459 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1008 14:43:45.024512  118459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:43:45.024537  118459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:43:45.024550  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:43:45.024614  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:43:45.024709  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:43:45.024722  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 14:43:45.024832  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:43:45.024842  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> /etc/test/nested/copy/98900/hosts
	I1008 14:43:45.024895  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:43:45.032438  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:45.049657  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:43:45.066943  118459 start.go:296] duration metric: took 166.34143ms for postStartSetup
	I1008 14:43:45.067016  118459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:43:45.067050  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.084921  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.184592  118459 command_runner.go:130] > 50%
	I1008 14:43:45.184676  118459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:43:45.188918  118459 command_runner.go:130] > 148G
	I1008 14:43:45.189157  118459 fix.go:56] duration metric: took 1.382403598s for fixHost
	I1008 14:43:45.189184  118459 start.go:83] releasing machines lock for "functional-367186", held for 1.382467794s
	I1008 14:43:45.189256  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:45.206786  118459 ssh_runner.go:195] Run: cat /version.json
	I1008 14:43:45.206834  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.206924  118459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:43:45.207047  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.224940  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.226308  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.323475  118459 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1008 14:43:45.323661  118459 ssh_runner.go:195] Run: systemctl --version
	I1008 14:43:45.374536  118459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 14:43:45.376350  118459 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1008 14:43:45.376387  118459 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1008 14:43:45.376484  118459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:43:45.412862  118459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:43:45.417295  118459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 14:43:45.417656  118459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:43:45.417717  118459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:43:45.425598  118459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:43:45.425618  118459 start.go:495] detecting cgroup driver to use...
	I1008 14:43:45.425645  118459 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:43:45.425686  118459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:43:45.440680  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:43:45.452844  118459 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:43:45.452899  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:43:45.466598  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:43:45.477998  118459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:43:45.564577  118459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:43:45.653273  118459 docker.go:234] disabling docker service ...
	I1008 14:43:45.653343  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:43:45.667540  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:43:45.679916  118459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:43:45.764673  118459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:43:45.852326  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:43:45.864944  118459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:43:45.878738  118459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 14:43:45.878793  118459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:43:45.878844  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.887987  118459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:43:45.888052  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.896857  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.905895  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.914639  118459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:43:45.922953  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.931880  118459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.940059  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.948635  118459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:43:45.955347  118459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 14:43:45.956050  118459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:43:45.963162  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.045488  118459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:43:46.156934  118459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:43:46.156997  118459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:43:46.161038  118459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 14:43:46.161067  118459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 14:43:46.161077  118459 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1008 14:43:46.161086  118459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.161094  118459 command_runner.go:130] > Access: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161118  118459 command_runner.go:130] > Modify: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161129  118459 command_runner.go:130] > Change: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161138  118459 command_runner.go:130] >  Birth: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161173  118459 start.go:563] Will wait 60s for crictl version
	I1008 14:43:46.161212  118459 ssh_runner.go:195] Run: which crictl
	I1008 14:43:46.164650  118459 command_runner.go:130] > /usr/local/bin/crictl
	I1008 14:43:46.164746  118459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:43:46.189255  118459 command_runner.go:130] > Version:  0.1.0
	I1008 14:43:46.189279  118459 command_runner.go:130] > RuntimeName:  cri-o
	I1008 14:43:46.189294  118459 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1008 14:43:46.189299  118459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 14:43:46.189317  118459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:43:46.189365  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.215704  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.215734  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.215741  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.215746  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.215750  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.215755  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.215762  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.215770  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.215806  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.215819  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.215825  118459 command_runner.go:130] >      static
	I1008 14:43:46.215835  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.215846  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.215857  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.215867  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.215877  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.215885  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.215897  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.215909  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.215921  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.217136  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.243203  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.243231  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.243241  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.243249  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.243256  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.243264  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.243272  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.243281  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.243293  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.243299  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.243304  118459 command_runner.go:130] >      static
	I1008 14:43:46.243312  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.243317  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.243327  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.243336  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.243348  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.243358  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.243374  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.243382  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.243390  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.246714  118459 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:43:46.248034  118459 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:43:46.264534  118459 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:43:46.268778  118459 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1008 14:43:46.268905  118459 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:43:46.269051  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:46.269113  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.298040  118459 command_runner.go:130] > {
	I1008 14:43:46.298059  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.298064  118459 command_runner.go:130] >     {
	I1008 14:43:46.298072  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.298077  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298082  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.298087  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298091  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298100  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.298109  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.298112  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298117  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.298121  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298138  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298146  118459 command_runner.go:130] >     },
	I1008 14:43:46.298151  118459 command_runner.go:130] >     {
	I1008 14:43:46.298164  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.298170  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298175  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.298181  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298185  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298191  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.298201  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.298207  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298210  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.298217  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298225  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298234  118459 command_runner.go:130] >     },
	I1008 14:43:46.298243  118459 command_runner.go:130] >     {
	I1008 14:43:46.298255  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.298262  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298267  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.298273  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298277  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298283  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.298293  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.298298  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298302  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.298309  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.298315  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298323  118459 command_runner.go:130] >     },
	I1008 14:43:46.298328  118459 command_runner.go:130] >     {
	I1008 14:43:46.298341  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.298350  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298359  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.298362  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298371  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298380  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.298387  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.298393  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298398  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.298408  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298417  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298425  118459 command_runner.go:130] >       },
	I1008 14:43:46.298438  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298461  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298467  118459 command_runner.go:130] >     },
	I1008 14:43:46.298472  118459 command_runner.go:130] >     {
	I1008 14:43:46.298481  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.298490  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298499  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.298507  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298514  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298521  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.298532  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.298540  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298548  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.298557  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298566  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298573  118459 command_runner.go:130] >       },
	I1008 14:43:46.298579  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298588  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298597  118459 command_runner.go:130] >     },
	I1008 14:43:46.298602  118459 command_runner.go:130] >     {
	I1008 14:43:46.298612  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.298619  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298628  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.298636  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298647  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298662  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.298676  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.298684  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298690  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.298699  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298705  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298713  118459 command_runner.go:130] >       },
	I1008 14:43:46.298725  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298735  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298744  118459 command_runner.go:130] >     },
	I1008 14:43:46.298752  118459 command_runner.go:130] >     {
	I1008 14:43:46.298762  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.298784  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298800  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.298808  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298815  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298829  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.298843  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.298851  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298860  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.298864  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298867  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298871  118459 command_runner.go:130] >     },
	I1008 14:43:46.298882  118459 command_runner.go:130] >     {
	I1008 14:43:46.298891  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.298895  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298899  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.298903  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298907  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298914  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.298931  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.298937  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298941  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.298948  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298952  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298957  118459 command_runner.go:130] >       },
	I1008 14:43:46.298961  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298967  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298971  118459 command_runner.go:130] >     },
	I1008 14:43:46.298978  118459 command_runner.go:130] >     {
	I1008 14:43:46.298987  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.298996  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.299004  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.299025  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299035  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.299047  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.299060  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.299068  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299074  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.299081  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.299087  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.299095  118459 command_runner.go:130] >       },
	I1008 14:43:46.299100  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.299108  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.299113  118459 command_runner.go:130] >     }
	I1008 14:43:46.299117  118459 command_runner.go:130] >   ]
	I1008 14:43:46.299125  118459 command_runner.go:130] > }
	I1008 14:43:46.300090  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.300109  118459 crio.go:433] Images already preloaded, skipping extraction
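The crictl payload echoed above is the CRI image list that minikube inspects before deciding the preload is already in place. A minimal sketch of decoding that shape in Go (the struct and variable names are illustrative; the JSON field names follow the output shown in the log):

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the JSON printed by `sudo crictl images --output json` above.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Username    string   `json:"username"`
	Pinned      bool     `json:"pinned"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Truncated sample payload in the same shape as the log output above.
	raw := []byte(`{"images":[{"id":"cd073f4c5f6a","repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092","pinned":true}]}`)
	var list criImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}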
	I1008 14:43:46.300168  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.325949  118459 command_runner.go:130] > {
	I1008 14:43:46.325970  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.325974  118459 command_runner.go:130] >     {
	I1008 14:43:46.325985  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.325990  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.325996  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.325999  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326003  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326016  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.326031  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.326040  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326047  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.326055  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326063  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326068  118459 command_runner.go:130] >     },
	I1008 14:43:46.326072  118459 command_runner.go:130] >     {
	I1008 14:43:46.326083  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.326089  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326094  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.326100  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326104  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326125  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.326136  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.326142  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326147  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.326151  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326158  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326163  118459 command_runner.go:130] >     },
	I1008 14:43:46.326166  118459 command_runner.go:130] >     {
	I1008 14:43:46.326172  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.326178  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326183  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.326188  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326192  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326201  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.326208  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.326213  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326219  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.326223  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.326226  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326229  118459 command_runner.go:130] >     },
	I1008 14:43:46.326232  118459 command_runner.go:130] >     {
	I1008 14:43:46.326238  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.326245  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326249  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.326252  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326256  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326262  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.326269  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.326275  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326279  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.326284  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326287  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326293  118459 command_runner.go:130] >       },
	I1008 14:43:46.326307  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326314  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326317  118459 command_runner.go:130] >     },
	I1008 14:43:46.326320  118459 command_runner.go:130] >     {
	I1008 14:43:46.326326  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.326331  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326335  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.326338  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326342  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326349  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.326358  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.326361  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326366  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.326369  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326373  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326378  118459 command_runner.go:130] >       },
	I1008 14:43:46.326382  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326385  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326392  118459 command_runner.go:130] >     },
	I1008 14:43:46.326395  118459 command_runner.go:130] >     {
	I1008 14:43:46.326401  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.326407  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326412  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.326415  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326419  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326429  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.326436  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.326453  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326460  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.326468  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326472  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326475  118459 command_runner.go:130] >       },
	I1008 14:43:46.326479  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326490  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326496  118459 command_runner.go:130] >     },
	I1008 14:43:46.326499  118459 command_runner.go:130] >     {
	I1008 14:43:46.326505  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.326511  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326515  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.326518  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326522  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326531  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.326538  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.326543  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326548  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.326551  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326555  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326558  118459 command_runner.go:130] >     },
	I1008 14:43:46.326561  118459 command_runner.go:130] >     {
	I1008 14:43:46.326567  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.326571  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326575  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.326578  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326582  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326588  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.326611  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.326617  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326621  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.326625  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326631  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326634  118459 command_runner.go:130] >       },
	I1008 14:43:46.326638  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326643  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326646  118459 command_runner.go:130] >     },
	I1008 14:43:46.326650  118459 command_runner.go:130] >     {
	I1008 14:43:46.326655  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.326666  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326673  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.326676  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326680  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326688  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.326698  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.326705  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326709  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.326714  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326718  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.326722  118459 command_runner.go:130] >       },
	I1008 14:43:46.326726  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326732  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.326735  118459 command_runner.go:130] >     }
	I1008 14:43:46.326738  118459 command_runner.go:130] >   ]
	I1008 14:43:46.326740  118459 command_runner.go:130] > }
	I1008 14:43:46.326842  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.326863  118459 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:43:46.326869  118459 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:43:46.326972  118459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
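The ExecStart line in the [Service] drop-in above is assembled from the per-node values in the dumped config (Kubernetes version, hostname-override, node-ip). A purely illustrative sketch, not minikube's actual template code, of composing the same flag string from those values:

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart flag string shown in the drop-in above
// from the node values in the config dump. Illustrative only.
func kubeletExecStart(version, hostname, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--cgroups-per-qos=false",
		"--config=/var/lib/kubelet/config.yaml",
		"--enforce-node-allocatable=",
		"--hostname-override=" + hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return "/var/lib/minikube/binaries/" + version + "/kubelet " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.34.1", "functional-367186", "192.168.49.2"))
}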
	I1008 14:43:46.327030  118459 ssh_runner.go:195] Run: crio config
	I1008 14:43:46.368296  118459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 14:43:46.368332  118459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 14:43:46.368340  118459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 14:43:46.368344  118459 command_runner.go:130] > #
	I1008 14:43:46.368350  118459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 14:43:46.368356  118459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 14:43:46.368362  118459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 14:43:46.368376  118459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 14:43:46.368381  118459 command_runner.go:130] > # reload'.
	I1008 14:43:46.368392  118459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 14:43:46.368405  118459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 14:43:46.368418  118459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 14:43:46.368433  118459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 14:43:46.368458  118459 command_runner.go:130] > [crio]
	I1008 14:43:46.368472  118459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 14:43:46.368480  118459 command_runner.go:130] > # containers images, in this directory.
	I1008 14:43:46.368492  118459 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1008 14:43:46.368502  118459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 14:43:46.368514  118459 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1008 14:43:46.368525  118459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 14:43:46.368536  118459 command_runner.go:130] > # imagestore = ""
	I1008 14:43:46.368546  118459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 14:43:46.368559  118459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 14:43:46.368566  118459 command_runner.go:130] > # storage_driver = "overlay"
	I1008 14:43:46.368580  118459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 14:43:46.368587  118459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 14:43:46.368594  118459 command_runner.go:130] > # storage_option = [
	I1008 14:43:46.368599  118459 command_runner.go:130] > # ]
	I1008 14:43:46.368608  118459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 14:43:46.368621  118459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 14:43:46.368631  118459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 14:43:46.368640  118459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 14:43:46.368651  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 14:43:46.368666  118459 command_runner.go:130] > # always happen on a node reboot
	I1008 14:43:46.368678  118459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 14:43:46.368702  118459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 14:43:46.368714  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 14:43:46.368726  118459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 14:43:46.368736  118459 command_runner.go:130] > # version_file_persist = ""
	I1008 14:43:46.368751  118459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 14:43:46.368767  118459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 14:43:46.368775  118459 command_runner.go:130] > # internal_wipe = true
	I1008 14:43:46.368791  118459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 14:43:46.368802  118459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 14:43:46.368820  118459 command_runner.go:130] > # internal_repair = true
	I1008 14:43:46.368834  118459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 14:43:46.368847  118459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 14:43:46.368859  118459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 14:43:46.368869  118459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 14:43:46.368882  118459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 14:43:46.368891  118459 command_runner.go:130] > [crio.api]
	I1008 14:43:46.368900  118459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 14:43:46.368910  118459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 14:43:46.368921  118459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 14:43:46.368931  118459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 14:43:46.368942  118459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 14:43:46.368954  118459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 14:43:46.368963  118459 command_runner.go:130] > # stream_port = "0"
	I1008 14:43:46.368971  118459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 14:43:46.368981  118459 command_runner.go:130] > # stream_enable_tls = false
	I1008 14:43:46.368992  118459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 14:43:46.369002  118459 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 14:43:46.369012  118459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 14:43:46.369025  118459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369033  118459 command_runner.go:130] > # stream_tls_cert = ""
	I1008 14:43:46.369043  118459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 14:43:46.369055  118459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369075  118459 command_runner.go:130] > # stream_tls_key = ""
	I1008 14:43:46.369092  118459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 14:43:46.369106  118459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 14:43:46.369121  118459 command_runner.go:130] > # automatically pick up the changes.
	I1008 14:43:46.369130  118459 command_runner.go:130] > # stream_tls_ca = ""
	I1008 14:43:46.369153  118459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369163  118459 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1008 14:43:46.369176  118459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369186  118459 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1008 14:43:46.369197  118459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 14:43:46.369209  118459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 14:43:46.369219  118459 command_runner.go:130] > [crio.runtime]
	I1008 14:43:46.369229  118459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 14:43:46.369240  118459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 14:43:46.369246  118459 command_runner.go:130] > # "nofile=1024:2048"
	I1008 14:43:46.369260  118459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 14:43:46.369269  118459 command_runner.go:130] > # default_ulimits = [
	I1008 14:43:46.369275  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369288  118459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 14:43:46.369296  118459 command_runner.go:130] > # no_pivot = false
	I1008 14:43:46.369305  118459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 14:43:46.369317  118459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 14:43:46.369327  118459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 14:43:46.369338  118459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 14:43:46.369348  118459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 14:43:46.369359  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369368  118459 command_runner.go:130] > # conmon = ""
	I1008 14:43:46.369375  118459 command_runner.go:130] > # Cgroup setting for conmon
	I1008 14:43:46.369386  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 14:43:46.369393  118459 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 14:43:46.369402  118459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 14:43:46.369410  118459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 14:43:46.369421  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369430  118459 command_runner.go:130] > # conmon_env = [
	I1008 14:43:46.369435  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369456  118459 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 14:43:46.369465  118459 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 14:43:46.369475  118459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 14:43:46.369484  118459 command_runner.go:130] > # default_env = [
	I1008 14:43:46.369489  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369498  118459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 14:43:46.369516  118459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1008 14:43:46.369528  118459 command_runner.go:130] > # selinux = false
	I1008 14:43:46.369539  118459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 14:43:46.369555  118459 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1008 14:43:46.369564  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369570  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.369582  118459 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1008 14:43:46.369602  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369609  118459 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1008 14:43:46.369619  118459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 14:43:46.369631  118459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 14:43:46.369644  118459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 14:43:46.369653  118459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 14:43:46.369661  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369672  118459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 14:43:46.369680  118459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 14:43:46.369690  118459 command_runner.go:130] > # the cgroup blockio controller.
	I1008 14:43:46.369697  118459 command_runner.go:130] > # blockio_config_file = ""
	I1008 14:43:46.369709  118459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 14:43:46.369718  118459 command_runner.go:130] > # blockio parameters.
	I1008 14:43:46.369724  118459 command_runner.go:130] > # blockio_reload = false
	I1008 14:43:46.369735  118459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 14:43:46.369744  118459 command_runner.go:130] > # irqbalance daemon.
	I1008 14:43:46.369857  118459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 14:43:46.369873  118459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 14:43:46.369884  118459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 14:43:46.369898  118459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 14:43:46.369909  118459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 14:43:46.369924  118459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 14:43:46.369934  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369943  118459 command_runner.go:130] > # rdt_config_file = ""
	I1008 14:43:46.369950  118459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 14:43:46.369959  118459 command_runner.go:130] > # cgroup_manager = "systemd"
	I1008 14:43:46.369968  118459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 14:43:46.369979  118459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 14:43:46.369989  118459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 14:43:46.370002  118459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 14:43:46.370011  118459 command_runner.go:130] > # will be added.
	I1008 14:43:46.370027  118459 command_runner.go:130] > # default_capabilities = [
	I1008 14:43:46.370036  118459 command_runner.go:130] > # 	"CHOWN",
	I1008 14:43:46.370044  118459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 14:43:46.370051  118459 command_runner.go:130] > # 	"FSETID",
	I1008 14:43:46.370054  118459 command_runner.go:130] > # 	"FOWNER",
	I1008 14:43:46.370062  118459 command_runner.go:130] > # 	"SETGID",
	I1008 14:43:46.370083  118459 command_runner.go:130] > # 	"SETUID",
	I1008 14:43:46.370093  118459 command_runner.go:130] > # 	"SETPCAP",
	I1008 14:43:46.370099  118459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 14:43:46.370108  118459 command_runner.go:130] > # 	"KILL",
	I1008 14:43:46.370113  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370127  118459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 14:43:46.370140  118459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 14:43:46.370152  118459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 14:43:46.370164  118459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 14:43:46.370173  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370183  118459 command_runner.go:130] > default_sysctls = [
	I1008 14:43:46.370193  118459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 14:43:46.370198  118459 command_runner.go:130] > ]
	I1008 14:43:46.370209  118459 command_runner.go:130] > # List of devices on the host that a
	I1008 14:43:46.370249  118459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 14:43:46.370259  118459 command_runner.go:130] > # allowed_devices = [
	I1008 14:43:46.370266  118459 command_runner.go:130] > # 	"/dev/fuse",
	I1008 14:43:46.370270  118459 command_runner.go:130] > # 	"/dev/net/tun",
	I1008 14:43:46.370277  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370285  118459 command_runner.go:130] > # List of additional devices. specified as
	I1008 14:43:46.370300  118459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 14:43:46.370312  118459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 14:43:46.370324  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370333  118459 command_runner.go:130] > # additional_devices = [
	I1008 14:43:46.370341  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370351  118459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 14:43:46.370360  118459 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 14:43:46.370366  118459 command_runner.go:130] > # 	"/etc/cdi",
	I1008 14:43:46.370370  118459 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 14:43:46.370378  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370387  118459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 14:43:46.370400  118459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 14:43:46.370411  118459 command_runner.go:130] > # Defaults to false.
	I1008 14:43:46.370422  118459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 14:43:46.370434  118459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 14:43:46.370462  118459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 14:43:46.370470  118459 command_runner.go:130] > # hooks_dir = [
	I1008 14:43:46.370481  118459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 14:43:46.370491  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370503  118459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 14:43:46.370515  118459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 14:43:46.370526  118459 command_runner.go:130] > # its default mounts from the following two files:
	I1008 14:43:46.370532  118459 command_runner.go:130] > #
	I1008 14:43:46.370538  118459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 14:43:46.370550  118459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 14:43:46.370562  118459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 14:43:46.370571  118459 command_runner.go:130] > #
	I1008 14:43:46.370580  118459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 14:43:46.370593  118459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 14:43:46.370605  118459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 14:43:46.370615  118459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 14:43:46.370623  118459 command_runner.go:130] > #
	I1008 14:43:46.370629  118459 command_runner.go:130] > # default_mounts_file = ""
	I1008 14:43:46.370637  118459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 14:43:46.370647  118459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 14:43:46.370657  118459 command_runner.go:130] > # pids_limit = -1
	I1008 14:43:46.370667  118459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1008 14:43:46.370679  118459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 14:43:46.370693  118459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 14:43:46.370708  118459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 14:43:46.370717  118459 command_runner.go:130] > # log_size_max = -1
	I1008 14:43:46.370728  118459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 14:43:46.370735  118459 command_runner.go:130] > # log_to_journald = false
	I1008 14:43:46.370743  118459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 14:43:46.370755  118459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 14:43:46.370763  118459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 14:43:46.370774  118459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 14:43:46.370785  118459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 14:43:46.370794  118459 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 14:43:46.370804  118459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 14:43:46.370819  118459 command_runner.go:130] > # read_only = false
	I1008 14:43:46.370828  118459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 14:43:46.370841  118459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 14:43:46.370850  118459 command_runner.go:130] > # live configuration reload.
	I1008 14:43:46.370856  118459 command_runner.go:130] > # log_level = "info"
	I1008 14:43:46.370868  118459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 14:43:46.370884  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.370893  118459 command_runner.go:130] > # log_filter = ""
	I1008 14:43:46.370905  118459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370917  118459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 14:43:46.370923  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370934  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.370943  118459 command_runner.go:130] > # uid_mappings = ""
	I1008 14:43:46.370955  118459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370967  118459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 14:43:46.370979  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370994  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371003  118459 command_runner.go:130] > # gid_mappings = ""
	I1008 14:43:46.371012  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 14:43:46.371023  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371037  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371055  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371064  118459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 14:43:46.371076  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 14:43:46.371087  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371100  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371112  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371122  118459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 14:43:46.371134  118459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 14:43:46.371146  118459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 14:43:46.371158  118459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 14:43:46.371168  118459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 14:43:46.371179  118459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 14:43:46.371188  118459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 14:43:46.371193  118459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 14:43:46.371204  118459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 14:43:46.371214  118459 command_runner.go:130] > # drop_infra_ctr = true
	I1008 14:43:46.371224  118459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 14:43:46.371235  118459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 14:43:46.371249  118459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 14:43:46.371258  118459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 14:43:46.371276  118459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 14:43:46.371285  118459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 14:43:46.371294  118459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 14:43:46.371306  118459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 14:43:46.371316  118459 command_runner.go:130] > # shared_cpuset = ""
	I1008 14:43:46.371326  118459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 14:43:46.371337  118459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 14:43:46.371346  118459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 14:43:46.371358  118459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 14:43:46.371366  118459 command_runner.go:130] > # pinns_path = ""
	I1008 14:43:46.371374  118459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 14:43:46.371385  118459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 14:43:46.371395  118459 command_runner.go:130] > # enable_criu_support = true
	I1008 14:43:46.371405  118459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 14:43:46.371417  118459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 14:43:46.371422  118459 command_runner.go:130] > # enable_pod_events = false
	I1008 14:43:46.371434  118459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 14:43:46.371453  118459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 14:43:46.371465  118459 command_runner.go:130] > # default_runtime = "crun"
	I1008 14:43:46.371473  118459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 14:43:46.371484  118459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1008 14:43:46.371501  118459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 14:43:46.371511  118459 command_runner.go:130] > # creation as a file is not desired either.
	I1008 14:43:46.371526  118459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 14:43:46.371537  118459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 14:43:46.371545  118459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 14:43:46.371552  118459 command_runner.go:130] > # ]
	I1008 14:43:46.371559  118459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 14:43:46.371568  118459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 14:43:46.371574  118459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 14:43:46.371579  118459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 14:43:46.371584  118459 command_runner.go:130] > #
	I1008 14:43:46.371589  118459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 14:43:46.371595  118459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 14:43:46.371599  118459 command_runner.go:130] > # runtime_type = "oci"
	I1008 14:43:46.371606  118459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 14:43:46.371610  118459 command_runner.go:130] > # inherit_default_runtime = false
	I1008 14:43:46.371621  118459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 14:43:46.371628  118459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 14:43:46.371633  118459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 14:43:46.371639  118459 command_runner.go:130] > # monitor_env = []
	I1008 14:43:46.371643  118459 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 14:43:46.371649  118459 command_runner.go:130] > # allowed_annotations = []
	I1008 14:43:46.371654  118459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 14:43:46.371660  118459 command_runner.go:130] > # no_sync_log = false
	I1008 14:43:46.371664  118459 command_runner.go:130] > # default_annotations = {}
	I1008 14:43:46.371672  118459 command_runner.go:130] > # stream_websockets = false
	I1008 14:43:46.371676  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.371698  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.371705  118459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 14:43:46.371711  118459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 14:43:46.371719  118459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 14:43:46.371727  118459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 14:43:46.371731  118459 command_runner.go:130] > #   in $PATH.
	I1008 14:43:46.371736  118459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 14:43:46.371743  118459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 14:43:46.371748  118459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 14:43:46.371753  118459 command_runner.go:130] > #   state.
	I1008 14:43:46.371759  118459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 14:43:46.371767  118459 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1008 14:43:46.371772  118459 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1008 14:43:46.371780  118459 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1008 14:43:46.371785  118459 command_runner.go:130] > #   the values from the default runtime on load time.
	I1008 14:43:46.371793  118459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 14:43:46.371801  118459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 14:43:46.371819  118459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 14:43:46.371827  118459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 14:43:46.371832  118459 command_runner.go:130] > #   The currently recognized values are:
	I1008 14:43:46.371840  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 14:43:46.371846  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 14:43:46.371854  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 14:43:46.371859  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 14:43:46.371869  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 14:43:46.371877  118459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 14:43:46.371885  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 14:43:46.371894  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 14:43:46.371900  118459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 14:43:46.371908  118459 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1008 14:43:46.371917  118459 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1008 14:43:46.371926  118459 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1008 14:43:46.371937  118459 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1008 14:43:46.371943  118459 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1008 14:43:46.371951  118459 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1008 14:43:46.371958  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1008 14:43:46.371966  118459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 14:43:46.371973  118459 command_runner.go:130] > #   deprecated option "conmon".
	I1008 14:43:46.371980  118459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 14:43:46.371987  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 14:43:46.371993  118459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 14:43:46.372000  118459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 14:43:46.372006  118459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 14:43:46.372013  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 14:43:46.372019  118459 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1008 14:43:46.372025  118459 command_runner.go:130] > #   conmon-rs by using:
	I1008 14:43:46.372032  118459 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1008 14:43:46.372041  118459 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1008 14:43:46.372050  118459 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1008 14:43:46.372060  118459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 14:43:46.372067  118459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 14:43:46.372073  118459 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1008 14:43:46.372083  118459 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1008 14:43:46.372090  118459 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1008 14:43:46.372097  118459 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1008 14:43:46.372107  118459 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1008 14:43:46.372116  118459 command_runner.go:130] > #   when a machine crash happens.
	I1008 14:43:46.372125  118459 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1008 14:43:46.372132  118459 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1008 14:43:46.372139  118459 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1008 14:43:46.372145  118459 command_runner.go:130] > #   seccomp profile for the runtime.
	I1008 14:43:46.372151  118459 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1008 14:43:46.372160  118459 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1008 14:43:46.372165  118459 command_runner.go:130] > #
	I1008 14:43:46.372170  118459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 14:43:46.372175  118459 command_runner.go:130] > #
	I1008 14:43:46.372181  118459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 14:43:46.372187  118459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 14:43:46.372192  118459 command_runner.go:130] > #
	I1008 14:43:46.372198  118459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 14:43:46.372205  118459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 14:43:46.372208  118459 command_runner.go:130] > #
	I1008 14:43:46.372214  118459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 14:43:46.372219  118459 command_runner.go:130] > # feature.
	I1008 14:43:46.372222  118459 command_runner.go:130] > #
	I1008 14:43:46.372228  118459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 14:43:46.372235  118459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 14:43:46.372242  118459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 14:43:46.372251  118459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 14:43:46.372259  118459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 14:43:46.372261  118459 command_runner.go:130] > #
	I1008 14:43:46.372267  118459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 14:43:46.372275  118459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 14:43:46.372281  118459 command_runner.go:130] > #
	I1008 14:43:46.372286  118459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 14:43:46.372294  118459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 14:43:46.372297  118459 command_runner.go:130] > #
	I1008 14:43:46.372302  118459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 14:43:46.372310  118459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 14:43:46.372314  118459 command_runner.go:130] > # limitation.
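As a hedged illustration of the notifier workflow described in the comments above: a pod opts in by carrying the "io.kubernetes.cri-o.seccompNotifierAction" annotation and, as noted, should use restartPolicy Never. The sketch below only builds such a pod object with the Kubernetes Go API types (it assumes the k8s.io/api and k8s.io/apimachinery modules are available); the pod name, container name, and image are hypothetical.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Minimal sketch: a pod that opts into the seccomp notifier via the
    	// annotation shown in the log, with restartPolicy Never so the kubelet
    	// does not immediately restart a container that CRI-O terminates.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "seccomp-notifier-demo", // hypothetical name
    			Annotations: map[string]string{
    				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
    			},
    		},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{
    				{Name: "app", Image: "registry.example/app:latest"}, // hypothetical image
    			},
    		},
    	}
    	fmt.Printf("%+v\n", pod.ObjectMeta.Annotations)
    }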
	I1008 14:43:46.372320  118459 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1008 14:43:46.372325  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1008 14:43:46.372330  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372334  118459 command_runner.go:130] > runtime_root = "/run/crun"
	I1008 14:43:46.372343  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372349  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372353  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372358  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372363  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372367  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372374  118459 command_runner.go:130] > allowed_annotations = [
	I1008 14:43:46.372380  118459 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1008 14:43:46.372384  118459 command_runner.go:130] > ]
	I1008 14:43:46.372391  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372395  118459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 14:43:46.372402  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1008 14:43:46.372406  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372411  118459 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 14:43:46.372415  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372422  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372425  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372432  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372436  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372453  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372461  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372473  118459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 14:43:46.372482  118459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 14:43:46.372491  118459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 14:43:46.372498  118459 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1008 14:43:46.372509  118459 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1008 14:43:46.372520  118459 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1008 14:43:46.372530  118459 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1008 14:43:46.372537  118459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 14:43:46.372545  118459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 14:43:46.372555  118459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 14:43:46.372562  118459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 14:43:46.372569  118459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 14:43:46.372574  118459 command_runner.go:130] > # Example:
	I1008 14:43:46.372578  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 14:43:46.372585  118459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 14:43:46.372591  118459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 14:43:46.372602  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 14:43:46.372608  118459 command_runner.go:130] > # cpuset = "0-1"
	I1008 14:43:46.372612  118459 command_runner.go:130] > # cpushares = "5"
	I1008 14:43:46.372617  118459 command_runner.go:130] > # cpuquota = "1000"
	I1008 14:43:46.372621  118459 command_runner.go:130] > # cpuperiod = "100000"
	I1008 14:43:46.372626  118459 command_runner.go:130] > # cpulimit = "35"
	I1008 14:43:46.372630  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.372634  118459 command_runner.go:130] > # The workload name is workload-type.
	I1008 14:43:46.372643  118459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 14:43:46.372650  118459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 14:43:46.372655  118459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 14:43:46.372665  118459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 14:43:46.372682  118459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
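A minimal sketch of the cpulimit-to-cpuquota arithmetic the workload comments above describe: cpulimit is given in millicores, so with a cpuperiod of 100000 microseconds a limit of 350 millicores corresponds to a quota of 35000 microseconds. The helper below is illustrative only; CRI-O's exact rounding behavior is an assumption, not taken from this log.

    package main

    import "fmt"

    // quotaFromMillicores converts a CPU limit in millicores into a CFS quota
    // in microseconds for the given period, per the workloads-table description.
    func quotaFromMillicores(millicores, periodUsec int64) int64 {
    	return millicores * periodUsec / 1000
    }

    func main() {
    	// 350 millicores over a 100000µs period -> 35000µs quota (values illustrative).
    	fmt.Println(quotaFromMillicores(350, 100000)) // 35000
    }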
	I1008 14:43:46.372689  118459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 14:43:46.372695  118459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 14:43:46.372701  118459 command_runner.go:130] > # Default value is set to true
	I1008 14:43:46.372706  118459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 14:43:46.372713  118459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 14:43:46.372717  118459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 14:43:46.372724  118459 command_runner.go:130] > # Default value is set to 'false'
	I1008 14:43:46.372728  118459 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 14:43:46.372735  118459 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1008 14:43:46.372743  118459 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1008 14:43:46.372748  118459 command_runner.go:130] > # timezone = ""
	I1008 14:43:46.372756  118459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 14:43:46.372761  118459 command_runner.go:130] > #
	I1008 14:43:46.372767  118459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 14:43:46.372775  118459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1008 14:43:46.372781  118459 command_runner.go:130] > [crio.image]
	I1008 14:43:46.372786  118459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 14:43:46.372792  118459 command_runner.go:130] > # default_transport = "docker://"
	I1008 14:43:46.372798  118459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 14:43:46.372822  118459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372828  118459 command_runner.go:130] > # global_auth_file = ""
	I1008 14:43:46.372833  118459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 14:43:46.372840  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372844  118459 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.372853  118459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 14:43:46.372861  118459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372871  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372877  118459 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 14:43:46.372883  118459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 14:43:46.372888  118459 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1008 14:43:46.372896  118459 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1008 14:43:46.372902  118459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 14:43:46.372908  118459 command_runner.go:130] > # pause_command = "/pause"
	I1008 14:43:46.372914  118459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 14:43:46.372922  118459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 14:43:46.372927  118459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 14:43:46.372935  118459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 14:43:46.372940  118459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 14:43:46.372948  118459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 14:43:46.372952  118459 command_runner.go:130] > # pinned_images = [
	I1008 14:43:46.372958  118459 command_runner.go:130] > # ]
	I1008 14:43:46.372963  118459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 14:43:46.372972  118459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 14:43:46.372978  118459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 14:43:46.372986  118459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 14:43:46.372991  118459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 14:43:46.372997  118459 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1008 14:43:46.373003  118459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 14:43:46.373012  118459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 14:43:46.373021  118459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 14:43:46.373029  118459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 14:43:46.373034  118459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 14:43:46.373042  118459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
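Since the log above pins signature_policy to /etc/crio/policy.json, a brief note on the file format may help: containers-policy.json(5) is a JSON document whose top-level "default" entry lists policy requirements. The sketch below embeds a permissive example policy as a Go string purely for illustration; it is an assumption about the format, not the policy actually installed on this node.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Illustrative containers-policy.json(5) content (permissive example),
    	// not the contents of /etc/crio/policy.json on the node under test.
    	const examplePolicy = `{"default":[{"type":"insecureAcceptAnything"}]}`

    	var parsed map[string]interface{}
    	if err := json.Unmarshal([]byte(examplePolicy), &parsed); err != nil {
    		panic(err)
    	}
    	fmt.Println(parsed["default"])
    }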
	I1008 14:43:46.373051  118459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 14:43:46.373058  118459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 14:43:46.373065  118459 command_runner.go:130] > # changing them here.
	I1008 14:43:46.373070  118459 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1008 14:43:46.373076  118459 command_runner.go:130] > # insecure_registries = [
	I1008 14:43:46.373079  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373087  118459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 14:43:46.373095  118459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 14:43:46.373104  118459 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 14:43:46.373112  118459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 14:43:46.373116  118459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 14:43:46.373124  118459 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1008 14:43:46.373130  118459 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1008 14:43:46.373134  118459 command_runner.go:130] > # auto_reload_registries = false
	I1008 14:43:46.373142  118459 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1008 14:43:46.373149  118459 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1008 14:43:46.373157  118459 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1008 14:43:46.373162  118459 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1008 14:43:46.373168  118459 command_runner.go:130] > # The mode of short name resolution.
	I1008 14:43:46.373174  118459 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1008 14:43:46.373183  118459 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1008 14:43:46.373190  118459 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1008 14:43:46.373195  118459 command_runner.go:130] > # short_name_mode = "enforcing"
	I1008 14:43:46.373204  118459 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1008 14:43:46.373212  118459 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1008 14:43:46.373216  118459 command_runner.go:130] > # oci_artifact_mount_support = true
	I1008 14:43:46.373224  118459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 14:43:46.373228  118459 command_runner.go:130] > # CNI plugins.
	I1008 14:43:46.373234  118459 command_runner.go:130] > [crio.network]
	I1008 14:43:46.373239  118459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 14:43:46.373246  118459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 14:43:46.373251  118459 command_runner.go:130] > # cni_default_network = ""
	I1008 14:43:46.373259  118459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 14:43:46.373266  118459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 14:43:46.373271  118459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 14:43:46.373277  118459 command_runner.go:130] > # plugin_dirs = [
	I1008 14:43:46.373280  118459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 14:43:46.373284  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373289  118459 command_runner.go:130] > # List of included pod metrics.
	I1008 14:43:46.373295  118459 command_runner.go:130] > # included_pod_metrics = [
	I1008 14:43:46.373297  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373304  118459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 14:43:46.373310  118459 command_runner.go:130] > [crio.metrics]
	I1008 14:43:46.373314  118459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 14:43:46.373320  118459 command_runner.go:130] > # enable_metrics = false
	I1008 14:43:46.373324  118459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 14:43:46.373331  118459 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 14:43:46.373337  118459 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 14:43:46.373347  118459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 14:43:46.373355  118459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 14:43:46.373359  118459 command_runner.go:130] > # metrics_collectors = [
	I1008 14:43:46.373364  118459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 14:43:46.373368  118459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 14:43:46.373371  118459 command_runner.go:130] > # 	"containers_oom_total",
	I1008 14:43:46.373374  118459 command_runner.go:130] > # 	"processes_defunct",
	I1008 14:43:46.373378  118459 command_runner.go:130] > # 	"operations_total",
	I1008 14:43:46.373381  118459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 14:43:46.373386  118459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 14:43:46.373389  118459 command_runner.go:130] > # 	"operations_errors_total",
	I1008 14:43:46.373393  118459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 14:43:46.373397  118459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 14:43:46.373400  118459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 14:43:46.373408  118459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 14:43:46.373411  118459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 14:43:46.373415  118459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 14:43:46.373422  118459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 14:43:46.373426  118459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 14:43:46.373430  118459 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1008 14:43:46.373436  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373450  118459 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1008 14:43:46.373460  118459 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1008 14:43:46.373468  118459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 14:43:46.373475  118459 command_runner.go:130] > # metrics_port = 9090
	I1008 14:43:46.373480  118459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 14:43:46.373486  118459 command_runner.go:130] > # metrics_socket = ""
	I1008 14:43:46.373490  118459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 14:43:46.373499  118459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 14:43:46.373508  118459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 14:43:46.373514  118459 command_runner.go:130] > # certificate on any modification event.
	I1008 14:43:46.373518  118459 command_runner.go:130] > # metrics_cert = ""
	I1008 14:43:46.373525  118459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 14:43:46.373530  118459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 14:43:46.373536  118459 command_runner.go:130] > # metrics_key = ""
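If enable_metrics were turned on (the default shown above is false), CRI-O's Prometheus endpoint would be reachable on metrics_host:metrics_port. A minimal sketch, assuming the defaults of 127.0.0.1 and 9090 from the log and plain HTTP:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Scrape CRI-O's Prometheus metrics endpoint (assumes enable_metrics = true
    	// and the default metrics_host/metrics_port shown in the config above).
    	resp, err := http.Get("http://127.0.0.1:9090/metrics")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", body)
    }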
	I1008 14:43:46.373542  118459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 14:43:46.373548  118459 command_runner.go:130] > [crio.tracing]
	I1008 14:43:46.373554  118459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 14:43:46.373564  118459 command_runner.go:130] > # enable_tracing = false
	I1008 14:43:46.373571  118459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 14:43:46.373576  118459 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1008 14:43:46.373584  118459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 14:43:46.373591  118459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 14:43:46.373598  118459 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 14:43:46.373604  118459 command_runner.go:130] > [crio.nri]
	I1008 14:43:46.373608  118459 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 14:43:46.373614  118459 command_runner.go:130] > # enable_nri = true
	I1008 14:43:46.373618  118459 command_runner.go:130] > # NRI socket to listen on.
	I1008 14:43:46.373624  118459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 14:43:46.373628  118459 command_runner.go:130] > # NRI plugin directory to use.
	I1008 14:43:46.373635  118459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 14:43:46.373640  118459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 14:43:46.373647  118459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 14:43:46.373653  118459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 14:43:46.373688  118459 command_runner.go:130] > # nri_disable_connections = false
	I1008 14:43:46.373696  118459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 14:43:46.373701  118459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 14:43:46.373705  118459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 14:43:46.373712  118459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 14:43:46.373717  118459 command_runner.go:130] > # NRI default validator configuration.
	I1008 14:43:46.373725  118459 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1008 14:43:46.373733  118459 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1008 14:43:46.373737  118459 command_runner.go:130] > # can be restricted/rejected:
	I1008 14:43:46.373743  118459 command_runner.go:130] > # - OCI hook injection
	I1008 14:43:46.373748  118459 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1008 14:43:46.373755  118459 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1008 14:43:46.373760  118459 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1008 14:43:46.373766  118459 command_runner.go:130] > # - adjustment of linux namespaces
	I1008 14:43:46.373772  118459 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1008 14:43:46.373780  118459 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1008 14:43:46.373788  118459 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1008 14:43:46.373791  118459 command_runner.go:130] > #
	I1008 14:43:46.373795  118459 command_runner.go:130] > # [crio.nri.default_validator]
	I1008 14:43:46.373802  118459 command_runner.go:130] > # nri_enable_default_validator = false
	I1008 14:43:46.373811  118459 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1008 14:43:46.373819  118459 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1008 14:43:46.373827  118459 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1008 14:43:46.373832  118459 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1008 14:43:46.373839  118459 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1008 14:43:46.373843  118459 command_runner.go:130] > # nri_validator_required_plugins = [
	I1008 14:43:46.373848  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373853  118459 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1008 14:43:46.373861  118459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 14:43:46.373865  118459 command_runner.go:130] > [crio.stats]
	I1008 14:43:46.373873  118459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 14:43:46.373880  118459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 14:43:46.373887  118459 command_runner.go:130] > # stats_collection_period = 0
	I1008 14:43:46.373892  118459 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1008 14:43:46.373900  118459 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1008 14:43:46.373907  118459 command_runner.go:130] > # collection_period = 0
	I1008 14:43:46.373928  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353034685Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1008 14:43:46.373938  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353062648Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1008 14:43:46.373948  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.35308236Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1008 14:43:46.373956  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353100078Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1008 14:43:46.373967  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353161884Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:46.373976  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353351718Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1008 14:43:46.373988  118459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 14:43:46.374064  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:46.374077  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:46.374093  118459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:43:46.374116  118459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:43:46.374237  118459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:43:46.374300  118459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:43:46.382363  118459 command_runner.go:130] > kubeadm
	I1008 14:43:46.382384  118459 command_runner.go:130] > kubectl
	I1008 14:43:46.382389  118459 command_runner.go:130] > kubelet
	I1008 14:43:46.382411  118459 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:43:46.382482  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:43:46.390162  118459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:43:46.403097  118459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:43:46.415613  118459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1008 14:43:46.428192  118459 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:43:46.432007  118459 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1008 14:43:46.432080  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.522533  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:46.535801  118459 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:43:46.535827  118459 certs.go:195] generating shared ca certs ...
	I1008 14:43:46.535849  118459 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:46.536002  118459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:43:46.536048  118459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:43:46.536069  118459 certs.go:257] generating profile certs ...
	I1008 14:43:46.536190  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:43:46.536242  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:43:46.536277  118459 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:43:46.536291  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:43:46.536306  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:43:46.536318  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:43:46.536330  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:43:46.536342  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:43:46.536377  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:43:46.536393  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:43:46.536405  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:43:46.536476  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:43:46.536513  118459 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:43:46.536523  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:43:46.536550  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:43:46.536574  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:43:46.536595  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:43:46.536635  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:46.536660  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.536675  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.536688  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.537241  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:43:46.555642  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:43:46.572819  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:43:46.590661  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:43:46.607931  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:43:46.625383  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:43:46.642336  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:43:46.659419  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:43:46.676486  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:43:46.693083  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:43:46.710326  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:43:46.727941  118459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:43:46.740780  118459 ssh_runner.go:195] Run: openssl version
	I1008 14:43:46.747268  118459 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1008 14:43:46.747351  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:43:46.756220  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760077  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760121  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760189  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.794493  118459 command_runner.go:130] > 3ec20f2e
	I1008 14:43:46.794726  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:43:46.803126  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:43:46.811855  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815648  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815718  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815789  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.849403  118459 command_runner.go:130] > b5213941
	I1008 14:43:46.849676  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:43:46.857958  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:43:46.866212  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869736  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869766  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869798  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.904128  118459 command_runner.go:130] > 51391683
	I1008 14:43:46.904402  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
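The "ln -fs" steps above install each CA under its OpenSSL subject-hash name (for example 3ec20f2e.0) so that TLS clients on the node can look it up in /etc/ssl/certs. As a minimal sketch of what that trust ultimately enables, the snippet below loads the installed minikubeCA.pem into a Go certificate pool; the path is taken from the log, and any use of the pool for verification would be hypothetical.

    package main

    import (
    	"crypto/x509"
    	"fmt"
    	"os"
    )

    func main() {
    	// Load the CA that the log installed into the system certificate directory.
    	pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		panic(err)
    	}

    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(pemBytes) {
    		panic("no certificates parsed from minikubeCA.pem")
    	}
    	fmt.Println("minikube CA loaded into trust pool")
    }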
	I1008 14:43:46.913326  118459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917356  118459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917385  118459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 14:43:46.917396  118459 command_runner.go:130] > Device: 8,1	Inode: 591874      Links: 1
	I1008 14:43:46.917405  118459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.917413  118459 command_runner.go:130] > Access: 2025-10-08 14:39:39.676864991 +0000
	I1008 14:43:46.917418  118459 command_runner.go:130] > Modify: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917426  118459 command_runner.go:130] > Change: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917431  118459 command_runner.go:130] >  Birth: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917505  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:43:46.951955  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.952157  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:43:46.986574  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.986789  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:43:47.021180  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.021253  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:43:47.054995  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.055238  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:43:47.088666  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.089049  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:43:47.123893  118459 command_runner.go:130] > Certificate will not expire
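	[editor's note] Each "openssl x509 ... -checkend 86400" run above exits successfully only if the certificate remains valid for at least another 24 hours; "Certificate will not expire" is openssl's success output. A rough Go equivalent that checks the same condition without shelling out (the path is taken from the log; the helper is illustrative, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// matching the intent of "openssl x509 -checkend 86400".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}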
	I1008 14:43:47.124156  118459 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:47.124254  118459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:43:47.124313  118459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:43:47.152244  118459 cri.go:89] found id: ""
	I1008 14:43:47.152307  118459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:43:47.160274  118459 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1008 14:43:47.160294  118459 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1008 14:43:47.160299  118459 command_runner.go:130] > /var/lib/minikube/etcd:
	I1008 14:43:47.160318  118459 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:43:47.160325  118459 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:43:47.160370  118459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:43:47.167663  118459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:43:47.167758  118459 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.167803  118459 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "functional-367186" cluster setting kubeconfig missing "functional-367186" context setting]
	I1008 14:43:47.168217  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
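	[editor's note] The "needs updating (will repair)" message above means the kubeconfig on disk lacked both a cluster entry and a context entry for functional-367186, so minikube rewrites the file under a write lock. A minimal client-go sketch of adding the missing entries (names, server address, and paths are copied from the log; this is an illustrative reconstruction, not minikube's own repair code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21681-94984/kubeconfig"

		// Load whatever is already there; fall back to an empty config on error.
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			cfg = api.NewConfig()
		}

		// Add the cluster and context entries the verifier found missing.
		cfg.Clusters["functional-367186"] = &api.Cluster{
			Server:               "https://192.168.49.2:8441",
			CertificateAuthority: "/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt",
		}
		cfg.Contexts["functional-367186"] = &api.Context{
			Cluster:  "functional-367186",
			AuthInfo: "functional-367186",
		}
		cfg.CurrentContext = "functional-367186"

		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			fmt.Println("write kubeconfig:", err)
		}
	}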
	I1008 14:43:47.169051  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.169269  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.170001  118459 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:43:47.170034  118459 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:43:47.170046  118459 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:43:47.170052  118459 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:43:47.170058  118459 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:43:47.170055  118459 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:43:47.170535  118459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:43:47.177804  118459 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 14:43:47.177829  118459 kubeadm.go:601] duration metric: took 17.498385ms to restartPrimaryControlPlane
	I1008 14:43:47.177836  118459 kubeadm.go:402] duration metric: took 53.689897ms to StartCluster
	I1008 14:43:47.177851  118459 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.177960  118459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.178692  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.178964  118459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:43:47.179000  118459 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:43:47.179182  118459 addons.go:69] Setting storage-provisioner=true in profile "functional-367186"
	I1008 14:43:47.179161  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:47.179199  118459 addons.go:238] Setting addon storage-provisioner=true in "functional-367186"
	I1008 14:43:47.179280  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.179202  118459 addons.go:69] Setting default-storageclass=true in profile "functional-367186"
	I1008 14:43:47.179355  118459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-367186"
	I1008 14:43:47.179643  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.179723  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.181696  118459 out.go:179] * Verifying Kubernetes components...
	I1008 14:43:47.182986  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:47.197887  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.198131  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.198516  118459 addons.go:238] Setting addon default-storageclass=true in "functional-367186"
	I1008 14:43:47.198560  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.198956  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.199610  118459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:43:47.201208  118459 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.201228  118459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:43:47.201280  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.224257  118459 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.224285  118459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:43:47.224346  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.226258  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.244099  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.285014  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:47.298345  118459 node_ready.go:35] waiting up to 6m0s for node "functional-367186" to be "Ready" ...
	I1008 14:43:47.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.298934  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:47.336898  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.352323  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.393808  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.393854  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.393886  118459 retry.go:31] will retry after 231.755958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407397  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.407475  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407496  118459 retry.go:31] will retry after 329.539024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.626786  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.679746  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.679800  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.679850  118459 retry.go:31] will retry after 393.16896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.738034  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.790656  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.792936  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.792970  118459 retry.go:31] will retry after 318.025551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.799129  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.799197  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.073934  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.111484  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.127850  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.127921  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.127943  118459 retry.go:31] will retry after 836.309595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.162277  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.164855  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.164886  118459 retry.go:31] will retry after 780.910281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.299211  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.299650  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.799557  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.799964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.946262  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.964996  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.998239  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.000519  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.000554  118459 retry.go:31] will retry after 896.283262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.018974  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.019036  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.019061  118459 retry.go:31] will retry after 1.078166751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.299460  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.299536  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.299868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:49.299950  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
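	[editor's note] The repeated GET /api/v1/nodes/functional-367186 requests come from node_ready.go waiting up to 6m for the node's Ready condition; while the apiserver is down every response is empty and the connection-refused warning above is logged, after which polling simply continues. A condensed client-go sketch of such a wait loop (the helper name is illustrative and the clientset wiring is an assumption, not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True,
	// tolerating transient errors such as "connection refused" while the
	// apiserver restarts.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Log and keep polling; the apiserver may still be coming up.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21681-94984/kubeconfig")
		if err != nil {
			fmt.Println(err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(waitNodeReady(context.Background(), cs, "functional-367186", 6*time.Minute))
	}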
	I1008 14:43:49.799616  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.799720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.800392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:49.897595  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:49.950387  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.950427  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.950463  118459 retry.go:31] will retry after 1.484279714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.097767  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:50.149377  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:50.149421  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.149465  118459 retry.go:31] will retry after 1.600335715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.298625  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:50.798695  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.798808  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.799174  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.298904  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.435639  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:51.489347  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.491876  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.491909  118459 retry.go:31] will retry after 2.200481753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.750291  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:51.799001  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.799398  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:51.799489  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:51.803486  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.803590  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.803616  118459 retry.go:31] will retry after 2.262800355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:52.299098  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.299177  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.299542  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:52.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.799399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.799764  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.298621  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.299048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.692777  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:53.745144  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:53.745204  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.745229  118459 retry.go:31] will retry after 3.527117876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.799392  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.799480  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.799857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:53.799918  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:54.067271  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:54.118417  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:54.118478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.118503  118459 retry.go:31] will retry after 3.862999365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.298755  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.298838  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.299219  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:54.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.799074  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.298863  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.298942  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.299253  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.798989  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.799066  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.799421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:56.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:56.299793  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:56.799548  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.799947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.272978  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:57.298541  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.298620  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.298918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.321958  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:57.324558  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.324587  118459 retry.go:31] will retry after 4.383767223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.799184  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.799301  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.799689  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.982062  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:58.032702  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:58.035195  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.035237  118459 retry.go:31] will retry after 5.903970239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:58.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:58.799473  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:59.298999  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.299078  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.299479  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:59.799062  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.799145  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.299550  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.799200  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.799275  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.799625  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:00.799685  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:01.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.299385  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.299774  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:01.709356  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:01.759088  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:01.761882  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.761921  118459 retry.go:31] will retry after 6.257319935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
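The retry.go lines record the same failure mode for the addon manifests: each failed kubectl apply is rescheduled after a growing, jittered delay (6.257319935s here, larger values later). The following is a rough sketch of that retry-with-backoff behaviour under the assumption of exponential growth plus jitter; it is illustrative only and may differ from minikube's actual retry helper.

package applyretry

// Assumed sketch of the retry-with-backoff behaviour the "will retry after ..."
// log lines record; not minikube's implementation.

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(deadline time.Duration, op func() error) error {
	start := time.Now()
	wait := 2 * time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		// Jitter the delay so concurrent appliers don't retry in lockstep,
		// which is why the log shows uneven values like 6.257319935s.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2 // exponential growth between attempts
	}
}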
	I1008 14:44:01.799124  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.799237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.299268  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.299716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.799390  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.799502  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.799880  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:02.799960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:03.299492  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.299563  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.299925  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.798665  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.798754  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.940379  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:03.990275  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:03.993084  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:03.993122  118459 retry.go:31] will retry after 4.028920288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:04.298653  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.299341  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:04.798956  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.799033  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:05.299051  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.299176  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.299598  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:05.299657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:05.799285  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.799356  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.799725  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.299393  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.299841  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.799593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.799944  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.299053  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.798714  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.798786  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.799261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:07.799325  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:08.019559  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:08.023109  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:08.072023  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.074947  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074963  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074982  118459 retry.go:31] will retry after 6.922745297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.076401  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.076428  118459 retry.go:31] will retry after 5.441570095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.298802  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.299153  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:08.799104  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.799539  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.299229  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.299310  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.299686  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.799379  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.799472  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.799807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:09.799869  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:10.299531  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.299603  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.299958  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:10.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.799011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.298647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.299123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.798895  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.799225  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:12.298842  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.298915  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:12.299310  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:12.798893  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.299008  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.518328  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:13.572977  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:13.573020  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.573038  118459 retry.go:31] will retry after 15.052611026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.798632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.798973  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.298894  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.299223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.798866  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.798962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:14.799351  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:14.998673  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:15.051035  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:15.051092  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.051116  118459 retry.go:31] will retry after 7.550335313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.299491  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.299568  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:15.799546  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.799646  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.800035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.298586  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.299006  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:17.298969  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.299043  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:17.299467  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:17.798964  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.299415  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.799349  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.799698  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:19.299431  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.299558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.299972  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:19.300047  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:19.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.299042  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.798691  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.798998  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.298572  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.298698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.299121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:21.799149  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:22.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:22.602557  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:22.653552  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:22.656108  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.656138  118459 retry.go:31] will retry after 31.201355729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.799459  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.799558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.799901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.299026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.798988  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.799061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:23.799539  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:24.299048  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.299131  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.299558  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:24.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.799285  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.799622  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.299437  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.299594  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.299994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.799056  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:26.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.298737  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.299066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:26.299138  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:26.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.799032  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.298934  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.299032  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.798977  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:28.298998  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.299130  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.299524  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:28.299599  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:28.625918  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:28.675593  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:28.678080  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.678122  118459 retry.go:31] will retry after 23.952219527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.799477  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.799570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.799970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.298589  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.298685  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.798713  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.798787  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.799221  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.298792  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.299229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.798891  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.799335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:30.799398  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:31.298936  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.299373  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:31.798930  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.799039  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.299072  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.799097  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.799529  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:32.799596  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:33.299230  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.299325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.299740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:33.798515  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.798587  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.798936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.299656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.798590  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.798664  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.799020  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:35.298588  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.298666  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.299052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:35.299143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:35.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.299007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.798626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:37.298948  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.299051  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:37.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:37.799006  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.799086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.799417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.299020  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.299100  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.299469  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.799369  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.799927  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:39.299580  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.299693  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.300082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:39.300150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:39.798611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.799046  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.298592  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.298670  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.798637  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.299138  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.798729  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.798815  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.799152  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:41.799215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:42.298723  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.298799  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.299170  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:42.798731  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.798836  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.799203  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.298908  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.299278  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.799167  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.799250  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:43.799661  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:44.299314  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.299416  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.299827  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:44.799577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.799657  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.800048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.298599  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.299047  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:46.298671  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.299126  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:46.299191  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:46.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.798850  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.799223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.299119  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.299231  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.299611  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.799336  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.799765  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:48.299501  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.299582  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.299947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:48.300006  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:48.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.798729  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.298752  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.798901  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.798982  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.298921  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.299003  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.798955  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.799416  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:50.799534  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:51.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.299214  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.299601  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:51.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.799388  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.799753  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.299413  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.299503  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.299839  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.631482  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:52.682310  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:52.684872  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.684901  118459 retry.go:31] will retry after 32.790446037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.799279  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.799368  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.799719  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:52.799778  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:53.299429  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.299873  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.799081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.858347  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:53.912029  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:53.912083  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:53.912107  118459 retry.go:31] will retry after 18.370397631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:54.298601  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:54.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.799095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:55.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.299226  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:55.299302  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:55.798903  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.798996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.298927  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.299347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:57.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.299509  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:57.299581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:57.799169  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.799283  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.299318  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.299391  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.299772  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.799563  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.799658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.800017  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.298677  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.299050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.798757  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:59.799217  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:00.298721  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.298821  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:00.798884  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.799337  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.298871  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.298949  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.299314  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.798878  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.799285  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:01.799345  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:02.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.299353  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:02.798928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.799012  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.799359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.298939  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.299014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.799249  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:03.799744  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:04.299367  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.299468  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.299800  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:04.799513  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.799614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.798722  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.799201  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:06.298786  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.298890  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.299232  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:06.299292  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:06.798807  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.798900  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.799230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.299263  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.299613  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.799343  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.799420  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.799763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:08.299428  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.299527  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.299872  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:08.299937  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:08.798593  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.798667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.799001  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.298582  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.798617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.798698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.298622  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.799101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:10.799164  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:11.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:11.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.282739  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:45:12.299378  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.299488  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.299877  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.333950  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336622  118459 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:12.799135  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.799209  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:12.799657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:13.299289  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.299709  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:13.798861  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.798943  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.298849  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.298932  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.299258  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.799040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:15.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.299098  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:15.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:15.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.799155  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.799530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.299229  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.299576  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.799320  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.799402  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.799740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.298566  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:17.799082  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:18.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.298700  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:18.798851  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.798935  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.298852  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.299298  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.798906  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.798988  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.799347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:19.799406  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:20.298933  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.299355  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:20.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.799025  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.799390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.298968  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.299041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.799011  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.799369  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:22.299008  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.299101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.299519  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:22.299580  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:22.799213  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.799289  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.299390  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.299767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.799544  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.799617  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.799951  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.298561  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.298641  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.798607  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.799048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:24.799112  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:25.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:25.476423  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:45:25.531081  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531142  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531259  118459 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:25.534376  118459 out.go:179] * Enabled addons: 
	I1008 14:45:25.535655  118459 addons.go:514] duration metric: took 1m38.356657385s for enable addons: enabled=[]
	I1008 14:45:25.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.798640  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.798959  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.298537  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.299011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.798610  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.798686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:26.799185  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:27.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.299111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:27.799210  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.799306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.799715  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.299395  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.299520  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.299905  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.798594  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:29.298630  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:29.299127  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:29.798717  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.798816  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.799196  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.299218  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-367186 repeat every ~500 ms from 14:45:30 through 14:46:30, each returning "connection refused" (milliseconds=0); node_ready.go logs the same retry warning roughly every 2.5 s ...]
	W1008 14:46:30.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:31.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.299084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:31.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.799089  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.298660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.798689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.798772  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.799169  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:32.799234  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:33.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:33.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.799101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.299040  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.299520  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.799151  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.799224  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.799552  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:34.799606  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:35.299196  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.299279  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:35.799293  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.799369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.799727  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.299400  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.299857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.799528  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.799601  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:36.799998  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:37.298659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.299094  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:37.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.798758  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.799112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.298715  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.298793  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.299167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.799005  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.799470  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:39.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.299482  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:39.299547  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:39.799057  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.799149  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.299162  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.299239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.299588  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.799254  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.799325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.799695  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:41.299348  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.299424  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.299798  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:41.299888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:41.799486  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.799571  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.799908  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.299014  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.798601  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.799021  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.298597  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.298675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.299015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.798718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:43.799158  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:44.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.299079  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:44.798646  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.298651  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.298724  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.798658  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:45.799190  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:46.298664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.298740  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.299081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:46.798660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.299010  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.299116  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.299468  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.799515  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:47.799577  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:48.299145  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.299237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.299586  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:48.799465  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.799540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.799893  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.299567  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.300081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.798774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.799156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:50.298747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.298852  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:50.299334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:50.798849  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.798940  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.799370  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.298974  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.299474  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.799088  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.799617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:52.299319  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.299399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.299750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:52.299815  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:52.799425  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.799532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.799968  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.298596  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.299057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.798951  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.799031  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.799358  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.298997  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.299141  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.299485  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.799052  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:54.799557  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:55.299016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.299471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:55.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.799427  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.299476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.799071  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:57.299385  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.299507  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.299911  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:57.299974  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:57.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.799954  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.298614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.298971  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.798638  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.798717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.298676  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.299184  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.798757  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.798865  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.799194  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:59.799261  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:00.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.299242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:00.798799  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.798882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.298869  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.298960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.299308  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.798868  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.798957  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:01.799395  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:02.298910  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.299004  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.299367  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:02.798967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.799471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.299109  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.799358  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.799437  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.799820  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:03.799888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:04.299467  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.299570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:04.798525  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.798605  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.798957  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.299064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:06.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.298755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.299139  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:06.299201  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:06.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.798775  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.799212  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.299173  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.299680  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.799348  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.799431  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.799818  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:08.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.299559  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.299887  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:08.299953  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:08.798622  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.298666  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.298743  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.299110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.798767  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.298823  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.299192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.799192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:10.799264  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:11.298772  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.298854  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.299193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:11.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.798887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.799274  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.298832  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.298912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.299277  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.798808  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.798896  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.799275  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:12.799334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:13.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.298906  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:13.799086  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.799171  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.799549  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.299233  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.299317  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.299685  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.799321  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.799395  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.799748  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:14.799845  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:15.299364  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.299434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.299756  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:15.799417  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.799861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.299614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.299915  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.798573  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.799007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:17.298827  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.299306  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:17.299381  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:17.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.798968  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.799302  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.298694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.799418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:19.299079  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.299153  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.299571  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:19.299630  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:19.799185  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.799262  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.799651  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.299313  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.299398  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.299801  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.800024  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:21.799168  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:22.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.298730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:22.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.798732  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.298704  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.298779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.299115  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.798943  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.799042  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:23.799509  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:24.298964  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.299040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.299390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:24.798583  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.798690  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.298624  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.299069  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.798756  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:26.298675  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:26.299192  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:26.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.799142  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.299005  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.299090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.299419  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.799045  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.799137  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.799544  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:28.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.299617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:28.299678  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:28.799473  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.799560  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.799899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.299985  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.798622  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.798983  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.298553  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.298632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.298995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.798697  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:30.799179  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:31.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.298695  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.299073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:31.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.298977  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.798588  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.798663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.799041  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:33.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:33.299097  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:33.798957  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.299095  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.299494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:35.299241  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:35.299795  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:35.799437  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.799530  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.799892  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.299548  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.798599  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.798674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.298967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.299050  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.299424  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.799403  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:37.799496  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:38.298988  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.299067  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.299408  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:38.799345  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.799481  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.799859  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.299510  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.299593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.299976  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:40.298711  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.298796  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:40.299245  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:40.798752  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.798837  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.799193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.298853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.299237  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.798946  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.799303  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:42.298889  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.298962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.299322  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:42.299384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:42.798944  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.298977  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.299047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.299368  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.799221  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.799302  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.799663  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:44.299294  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.299790  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:44.299872  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:44.799433  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.799542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.799888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.299563  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.299636  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.299993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:46.299512  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.299633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.300025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:46.300089  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:46.798790  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.798884  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.799229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.299087  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.299184  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.299563  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.798932  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.799009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.799428  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.299029  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.299106  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.299501  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.799380  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.799486  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.799833  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:48.799903  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:49.299564  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.300007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:49.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.799052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:51.298640  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.299093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:51.299156  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:51.798681  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.798761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.799132  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.298710  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.298829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.798883  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.799265  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:53.298856  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.298931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:53.299362  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:53.799190  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.799266  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.299296  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.799472  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.799553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.799952  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.298584  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.298660  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.798627  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.798713  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:55.799173  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:56.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.298834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:56.798788  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.798866  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.799242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.299122  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.299496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.799239  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.799714  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:57.799774  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:58.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.299464  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.299809  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:58.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.798672  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.799025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.298591  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.298674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.798618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.798694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.799057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:00.298633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:00.299182  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:00.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.799076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.298687  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.298762  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.299124  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.798694  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.798782  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.799125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.298730  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.298807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.299143  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:02.799242  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:03.298766  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.299191  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:03.799090  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.799168  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.799556  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.798656  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:05.298725  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.298803  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.299148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:05.299215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:05.798756  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.798859  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.298856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.299228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.799046  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.799394  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:07.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.299273  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:07.299732  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:07.799538  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.799609  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.799950  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.299147  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.799521  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:09.299345  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.299428  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.299805  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:09.299871  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:09.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.298815  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.298898  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.799063  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.799142  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.799548  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:11.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.299512  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.299861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:11.299938  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:11.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.298858  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.298934  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.298773  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.298847  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.799118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.799495  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:13.799564  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:14.299338  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.299418  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.299784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:14.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.798633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.798966  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.299111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.798836  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:16.299034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.299119  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.299472  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:16.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:16.799263  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.799716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.299984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.799093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.298690  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.298768  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.299127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.798926  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.799002  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:18.799405  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:19.298954  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.299028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.299371  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:19.798980  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.299425  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.798994  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.799140  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.799508  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:20.799581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:21.299202  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.299281  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.299656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:21.799334  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.799412  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.799779  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.299478  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.299564  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.798566  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.798990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:23.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.298653  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:23.299069  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:23.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.799024  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.298958  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.299387  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.799037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:25.299272  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.299346  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:25.299785  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:25.799564  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.799644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.800010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.298851  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.299197  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.798945  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.799020  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:27.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.299762  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:27.299828  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:27.799408  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.799498  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.799868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.299505  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.299589  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.299938  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.798710  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.799066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.298603  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.299072  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.799067  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:29.799143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:30.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.298723  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:30.798639  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.798719  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.298623  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:32.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.299071  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:32.299152  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:32.798666  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.798747  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.799135  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.298695  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.798993  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.799069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:34.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.299476  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.299807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:34.299873  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:34.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.798675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.298918  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.299259  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.799014  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.299386  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.299754  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.798548  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.798627  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:36.799056  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:37.298853  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.298929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.299261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:37.798581  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.298605  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.799034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:38.799603  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:39.299424  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.299514  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.299862  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:39.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.799092  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.298907  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.298997  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.299335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.799204  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.799649  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:40.799728  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:41.299541  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.299632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.299970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:41.798741  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.798831  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.799187  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.298986  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.299069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.299473  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.799301  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.799376  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.799728  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:42.799794  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:43.298557  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.298631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.299030  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:43.798919  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.799001  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.799377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.299220  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.299306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.299666  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.799308  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.799379  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.799750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:45.299391  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.299504  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.299837  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:45.299906  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:45.799476  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.799562  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.799953  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.298535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.298610  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.298988  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.798683  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.799014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:47.799500  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:48.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.299084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.299436  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:48.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.799397  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.799757  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.299469  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.299546  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.798748  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.799121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:50.298729  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.298811  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.299173  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:50.299238  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:50.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.798856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.799248  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.298812  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.298897  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.798948  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:52.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.299070  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:52.299545  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:52.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.799504  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.299161  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.299264  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.299675  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.799435  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.799534  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.799875  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.298718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.299112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.798929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.799294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:54.799357  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:55.299157  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.299235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.299606  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:55.799386  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.799470  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.799852  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.299065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.798779  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.798868  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.799243  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:57.299138  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.299227  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.299600  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:57.299666  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:57.799470  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.799545  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.799918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.298679  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.298761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.299149  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.799015  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.799090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:59.299293  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.299392  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.299742  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:59.299808  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:59.798577  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.299326  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.799153  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:01.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.299553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.299898  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:01.299965  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:01.798701  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.298874  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.299315  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.799145  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.799228  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.799568  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.299513  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.798557  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.799073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:03.799140  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:04.298885  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.298976  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.299401  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:04.799261  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.799710  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.299549  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.299642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.300048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.798774  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.798849  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.799206  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:05.799268  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:06.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.299053  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:06.799240  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.799328  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.799681  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.299414  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.299532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.799044  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:08.298825  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:08.299350  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:08.799137  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.799221  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.799589  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.299540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.299921  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.799064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:10.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.298925  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.299313  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:10.299380  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:10.799149  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.799223  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.799572  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.299419  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.299531  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.299928  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.798698  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.798777  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.799140  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:12.298875  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.299357  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:12.299428  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:12.799215  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.799641  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.299434  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.299538  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.299901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.798658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.798993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.298718  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.298806  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.299190  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.798984  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.799423  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:14.799511  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:15.299254  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.299343  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:15.798574  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.798655  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.298700  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.298800  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.299145  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.799300  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:17.299095  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.299193  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.299535  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:17.299597  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:17.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.799337  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.299759  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.799524  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.799598  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:19.299552  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.299638  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:19.300058  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:19.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.299002  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.798789  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.298846  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.298952  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.299301  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.799159  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.799239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.799630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:21.799697  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:22.299522  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.299619  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.299991  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:22.798758  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.798834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.799181  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.299061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.299437  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.799357  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.799433  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.799786  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:23.799850  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:24.298547  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:24.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.798835  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.799161  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.298901  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.298996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.299334  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.799154  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.799236  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.799604  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:26.299399  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.299521  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.299888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:26.299960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:26.798629  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.799035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.298805  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.298901  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.299256  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.798972  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.799378  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.299186  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.799616  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.800091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:28.800170  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:29.298943  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.299021  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.299362  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:29.799176  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.799282  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.299485  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.299566  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.299899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.798586  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:31.298771  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.299157  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:31.299210  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:31.798882  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.798989  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.299195  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.299278  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.299631  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.799405  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.799515  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.799866  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.298635  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.798843  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.798922  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.799266  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:33.799342  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:34.299019  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.299432  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:34.799270  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.799358  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.799712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.299543  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.299995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.798712  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.798807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.799171  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:36.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.298739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:36.299199  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:36.798682  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.299039  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.299475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.799319  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.799403  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.298633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.298999  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.799060  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:38.799123  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:39.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.298919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:39.799162  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.799585  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.299409  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.299508  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.299869  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.799084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:40.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:41.298831  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.298921  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:41.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.299467  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.299819  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.798568  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.798643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.798984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:43.298738  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.298822  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:43.299318  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:43.799035  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.799483  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.299382  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.299773  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.798575  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.799012  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.298748  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.298824  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.299159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.798886  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.798960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.799321  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:45.799384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:46.299022  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.299330  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:46.798742  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.798830  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.799234  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:47.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:49:47.299208  118459 node_ready.go:38] duration metric: took 6m0.000826952s for node "functional-367186" to be "Ready" ...
	I1008 14:49:47.302039  118459 out.go:203] 
	W1008 14:49:47.303804  118459 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 14:49:47.303820  118459 out.go:285] * 
	W1008 14:49:47.305511  118459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:49:47.306606  118459 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.462892455Z" level=info msg="createCtr: removing container 8651f476039be7edc94ef50784c528612ba9c7504c2e7a8ee289820d1780bb48" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.462919806Z" level=info msg="createCtr: deleting container 8651f476039be7edc94ef50784c528612ba9c7504c2e7a8ee289820d1780bb48 from storage" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:40 functional-367186 crio[2943]: time="2025-10-08T14:49:40.465060835Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c58427f58fdd58b4fdb4fadaedd99fdb_0" id=aa6cd264-7360-4f24-a9ec-be4053570fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.436638949Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a4632e23-5922-462a-a3da-a900330698c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.437472378Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=47ae67a6-9e88-4255-8bae-b89ffdfc7dfe name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.438306148Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.438529725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.441687675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.442240801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.464500429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465884187Z" level=info msg="createCtr: deleting container ID 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f from idIndex" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465930829Z" level=info msg="createCtr: removing container 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.465963795Z" level=info msg="createCtr: deleting container 4de22756f9b5388c90e04889e02afb0fb4239a79f7d3dd3054855889e675334f from storage" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:44 functional-367186 crio[2943]: time="2025-10-08T14:49:44.468045769Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=c7980495-76a7-45c5-b4f8-ee77f0e26bf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.436890997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a8969f44-0f4e-4c5c-955a-6ae3ad79f3a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.437871883Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4b99943c-c84c-4270-9a6b-a336ea2755ae name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.440800008Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.441097787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.444672553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.445085701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.460021036Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461676485Z" level=info msg="createCtr: deleting container ID d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f from idIndex" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461723396Z" level=info msg="createCtr: removing container d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.461764456Z" level=info msg="createCtr: deleting container d5911b14bcb6c6aefc1a913b29c52db4c43b0697dba39c99c3f1c55cb1abf37f from storage" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:47 functional-367186 crio[2943]: time="2025-10-08T14:49:47.464213396Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=b768aada-d7d0-4e20-a422-deb03333da7e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:49:51.045792    4506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:51.046324    4506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:51.047912    4506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:51.048414    4506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:51.049978    4506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 14:49:51 up  2:32,  0 user,  load average: 0.14, 0.06, 0.45
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 14:49:42 functional-367186 kubelet[1801]: E1008 14:49:42.113576    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:49:42 functional-367186 kubelet[1801]: I1008 14:49:42.326193    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:49:42 functional-367186 kubelet[1801]: E1008 14:49:42.326601    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.436207    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468290    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:44 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:44 functional-367186 kubelet[1801]:  > podSandboxID="4f5c4547ba25f8047b1a01ec096a800bad6487d4d0d0fe8fd4a152424b0efbf9"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468378    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:44 functional-367186 kubelet[1801]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:44 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:44 functional-367186 kubelet[1801]: E1008 14:49:44.468407    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.436410    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464562    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:47 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:47 functional-367186 kubelet[1801]:  > podSandboxID="4a13bc9351a22b93554dcee46226666905c4e1638ab46a476341d1435096d9d8"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464667    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:47 functional-367186 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:47 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:47 functional-367186 kubelet[1801]: E1008 14:49:47.464699    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 14:49:48 functional-367186 kubelet[1801]: E1008 14:49:48.243246    1801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 08 14:49:49 functional-367186 kubelet[1801]: E1008 14:49:49.114737    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:49:49 functional-367186 kubelet[1801]: I1008 14:49:49.327929    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:49:49 functional-367186 kubelet[1801]: E1008 14:49:49.328334    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:49:49 functional-367186 kubelet[1801]: E1008 14:49:49.988478    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-367186&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 14:49:51 functional-367186 kubelet[1801]: E1008 14:49:51.068473    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-367186.186c8afed11699ef\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8afed11699ef  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:39:41.429266927 +0000 UTC m=+0.550355432,LastTimestamp:2025-10-08 14:39:41.43072231 +0000 UTC m=+0.551810801,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (301.292874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.06s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 kubectl -- --context functional-367186 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 kubectl -- --context functional-367186 get pods: exit status 1 (93.4873ms)

                                                
                                                
** stderr ** 
	E1008 14:49:58.283991  123926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:58.284343  123926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:58.285768  123926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:58.286122  123926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:49:58.287483  123926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-367186 kubectl -- --context functional-367186 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
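The JSON block above is raw "docker container inspect" output for the functional-367186 node container; the Last Start log further below reads the same data through templated inspect calls, for example to find the host port published for 22/tcp (32778 here). A minimal sketch of that lookup, assuming only a local Docker daemon and the profile name used in this report:

	// Illustrative sketch, not produced by the test run: reproduce the
	// templated "docker container inspect" lookup seen in the log below
	// to read the host port mapped to the container's 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template text that appears in the cli_runner lines of the log.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-367186").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out)))
	}

Against the container inspected above this would print 32778, the port the provisioner later dials for SSH.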
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (288.834772ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
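The --format={{.Host}} flag used above is a Go text/template evaluated against minikube's status fields, which is why the command prints only "Running" even though it exits with status 2. A rough, hypothetical equivalent of that rendering (the struct below is a stand-in, not minikube's actual status type):

	// Illustrative sketch of how a --format template such as {{.Host}} renders.
	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the fields a status template can reference.
	type hostStatus struct {
		Host string
	}

	func main() {
		// Same template text as the --format argument above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints "Running", matching the stdout captured in this post-mortem.
		if err := tmpl.Execute(os.Stdout, hostStatus{Host: "Running"}); err != nil {
			panic(err)
		}
	}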
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.1                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.3                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:latest                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add minikube-local-cache-test:functional-367186                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache delete minikube-local-cache-test:functional-367186                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl images                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ cache   │ functional-367186 cache reload                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ kubectl │ functional-367186 kubectl -- --context functional-367186 get pods                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:43:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:43:43.627861  118459 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:43:43.627954  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.627958  118459 out.go:374] Setting ErrFile to fd 2...
	I1008 14:43:43.627962  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.628171  118459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:43:43.628614  118459 out.go:368] Setting JSON to false
	I1008 14:43:43.629495  118459 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8775,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:43:43.629593  118459 start.go:141] virtualization: kvm guest
	I1008 14:43:43.631500  118459 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:43:43.632767  118459 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:43:43.632773  118459 notify.go:220] Checking for updates...
	I1008 14:43:43.634937  118459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:43:43.636218  118459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:43.640666  118459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:43:43.642185  118459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:43:43.643421  118459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:43:43.644930  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:43.645039  118459 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:43:43.667985  118459 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:43:43.668119  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.723136  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.713080092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.723287  118459 docker.go:318] overlay module found
	I1008 14:43:43.725936  118459 out.go:179] * Using the docker driver based on existing profile
	I1008 14:43:43.727069  118459 start.go:305] selected driver: docker
	I1008 14:43:43.727087  118459 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.727171  118459 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:43:43.727263  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.781426  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.772365606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.782086  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:43.782179  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:43.782243  118459 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.784039  118459 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:43:43.785148  118459 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:43:43.786245  118459 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:43:43.787146  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:43.787178  118459 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:43:43.787189  118459 cache.go:58] Caching tarball of preloaded images
	I1008 14:43:43.787237  118459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:43:43.787273  118459 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:43:43.787283  118459 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:43:43.787359  118459 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:43:43.806536  118459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:43:43.806562  118459 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:43:43.806584  118459 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:43:43.806623  118459 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:43:43.806704  118459 start.go:364] duration metric: took 49.444µs to acquireMachinesLock for "functional-367186"
	I1008 14:43:43.806736  118459 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:43:43.806747  118459 fix.go:54] fixHost starting: 
	I1008 14:43:43.806975  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:43.822750  118459 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:43:43.822776  118459 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:43:43.824577  118459 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:43:43.824603  118459 machine.go:93] provisionDockerMachine start ...
	I1008 14:43:43.824673  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:43.841160  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:43.841463  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:43.841483  118459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:43:43.985591  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:43.985624  118459 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:43:43.985682  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.003073  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.003294  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.003316  118459 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:43:44.156671  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:44.156765  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.173583  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.173820  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.173845  118459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:43:44.319171  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:43:44.319200  118459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:43:44.319238  118459 ubuntu.go:190] setting up certificates
	I1008 14:43:44.319253  118459 provision.go:84] configureAuth start
	I1008 14:43:44.319306  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:44.337134  118459 provision.go:143] copyHostCerts
	I1008 14:43:44.337168  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337204  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:43:44.337226  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337295  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:43:44.337373  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337398  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:43:44.337405  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337431  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:43:44.337503  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337524  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:43:44.337531  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337557  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:43:44.337611  118459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:43:44.449681  118459 provision.go:177] copyRemoteCerts
	I1008 14:43:44.449756  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:43:44.449792  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.466984  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:44.569881  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:43:44.569953  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:43:44.587517  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:43:44.587583  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:43:44.605065  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:43:44.605124  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:43:44.622323  118459 provision.go:87] duration metric: took 303.055536ms to configureAuth
	I1008 14:43:44.622354  118459 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:43:44.622537  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:44.622644  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.639387  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.639612  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.639636  118459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:43:44.900547  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:43:44.900571  118459 machine.go:96] duration metric: took 1.07595926s to provisionDockerMachine
	I1008 14:43:44.900586  118459 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:43:44.900600  118459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:43:44.900655  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:43:44.900706  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.917783  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.020925  118459 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:43:45.024356  118459 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1008 14:43:45.024381  118459 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1008 14:43:45.024389  118459 command_runner.go:130] > VERSION_ID="12"
	I1008 14:43:45.024395  118459 command_runner.go:130] > VERSION="12 (bookworm)"
	I1008 14:43:45.024402  118459 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1008 14:43:45.024406  118459 command_runner.go:130] > ID=debian
	I1008 14:43:45.024410  118459 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1008 14:43:45.024415  118459 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1008 14:43:45.024420  118459 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1008 14:43:45.024512  118459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:43:45.024537  118459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:43:45.024550  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:43:45.024614  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:43:45.024709  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:43:45.024722  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 14:43:45.024832  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:43:45.024842  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> /etc/test/nested/copy/98900/hosts
	I1008 14:43:45.024895  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:43:45.032438  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:45.049657  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:43:45.066943  118459 start.go:296] duration metric: took 166.34143ms for postStartSetup
	I1008 14:43:45.067016  118459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:43:45.067050  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.084921  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.184592  118459 command_runner.go:130] > 50%
	I1008 14:43:45.184676  118459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:43:45.188918  118459 command_runner.go:130] > 148G
	I1008 14:43:45.189157  118459 fix.go:56] duration metric: took 1.382403598s for fixHost
	I1008 14:43:45.189184  118459 start.go:83] releasing machines lock for "functional-367186", held for 1.382467794s
	I1008 14:43:45.189256  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:45.206786  118459 ssh_runner.go:195] Run: cat /version.json
	I1008 14:43:45.206834  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.206924  118459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:43:45.207047  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.224940  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.226308  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.323475  118459 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1008 14:43:45.323661  118459 ssh_runner.go:195] Run: systemctl --version
	I1008 14:43:45.374536  118459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 14:43:45.376350  118459 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1008 14:43:45.376387  118459 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1008 14:43:45.376484  118459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:43:45.412862  118459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:43:45.417295  118459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 14:43:45.417656  118459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:43:45.417717  118459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:43:45.425598  118459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:43:45.425618  118459 start.go:495] detecting cgroup driver to use...
	I1008 14:43:45.425645  118459 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:43:45.425686  118459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:43:45.440680  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:43:45.452844  118459 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:43:45.452899  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:43:45.466598  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:43:45.477998  118459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:43:45.564577  118459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:43:45.653273  118459 docker.go:234] disabling docker service ...
	I1008 14:43:45.653343  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:43:45.667540  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:43:45.679916  118459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:43:45.764673  118459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:43:45.852326  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:43:45.864944  118459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:43:45.878738  118459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 14:43:45.878793  118459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:43:45.878844  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.887987  118459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:43:45.888052  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.896857  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.905895  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.914639  118459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:43:45.922953  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.931880  118459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.940059  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.948635  118459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:43:45.955347  118459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 14:43:45.956050  118459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:43:45.963162  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.045488  118459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:43:46.156934  118459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:43:46.156997  118459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:43:46.161038  118459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 14:43:46.161067  118459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 14:43:46.161077  118459 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1008 14:43:46.161086  118459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.161094  118459 command_runner.go:130] > Access: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161118  118459 command_runner.go:130] > Modify: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161129  118459 command_runner.go:130] > Change: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161138  118459 command_runner.go:130] >  Birth: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161173  118459 start.go:563] Will wait 60s for crictl version
	I1008 14:43:46.161212  118459 ssh_runner.go:195] Run: which crictl
	I1008 14:43:46.164650  118459 command_runner.go:130] > /usr/local/bin/crictl
	I1008 14:43:46.164746  118459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:43:46.189255  118459 command_runner.go:130] > Version:  0.1.0
	I1008 14:43:46.189279  118459 command_runner.go:130] > RuntimeName:  cri-o
	I1008 14:43:46.189294  118459 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1008 14:43:46.189299  118459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 14:43:46.189317  118459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:43:46.189365  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.215704  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.215734  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.215741  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.215746  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.215750  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.215755  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.215762  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.215770  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.215806  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.215819  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.215825  118459 command_runner.go:130] >      static
	I1008 14:43:46.215835  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.215846  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.215857  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.215867  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.215877  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.215885  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.215897  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.215909  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.215921  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.217136  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.243203  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.243231  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.243241  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.243249  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.243256  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.243264  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.243272  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.243281  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.243293  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.243299  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.243304  118459 command_runner.go:130] >      static
	I1008 14:43:46.243312  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.243317  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.243327  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.243336  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.243348  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.243358  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.243374  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.243382  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.243390  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.246714  118459 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:43:46.248034  118459 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:43:46.264534  118459 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:43:46.268778  118459 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1008 14:43:46.268905  118459 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:43:46.269051  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:46.269113  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.298040  118459 command_runner.go:130] > {
	I1008 14:43:46.298059  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.298064  118459 command_runner.go:130] >     {
	I1008 14:43:46.298072  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.298077  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298082  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.298087  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298091  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298100  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.298109  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.298112  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298117  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.298121  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298138  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298146  118459 command_runner.go:130] >     },
	I1008 14:43:46.298151  118459 command_runner.go:130] >     {
	I1008 14:43:46.298164  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.298170  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298175  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.298181  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298185  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298191  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.298201  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.298207  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298210  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.298217  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298225  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298234  118459 command_runner.go:130] >     },
	I1008 14:43:46.298243  118459 command_runner.go:130] >     {
	I1008 14:43:46.298255  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.298262  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298267  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.298273  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298277  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298283  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.298293  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.298298  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298302  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.298309  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.298315  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298323  118459 command_runner.go:130] >     },
	I1008 14:43:46.298328  118459 command_runner.go:130] >     {
	I1008 14:43:46.298341  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.298350  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298359  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.298362  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298371  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298380  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.298387  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.298393  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298398  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.298408  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298417  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298425  118459 command_runner.go:130] >       },
	I1008 14:43:46.298438  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298461  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298467  118459 command_runner.go:130] >     },
	I1008 14:43:46.298472  118459 command_runner.go:130] >     {
	I1008 14:43:46.298481  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.298490  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298499  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.298507  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298514  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298521  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.298532  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.298540  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298548  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.298557  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298566  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298573  118459 command_runner.go:130] >       },
	I1008 14:43:46.298579  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298588  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298597  118459 command_runner.go:130] >     },
	I1008 14:43:46.298602  118459 command_runner.go:130] >     {
	I1008 14:43:46.298612  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.298619  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298628  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.298636  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298647  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298662  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.298676  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.298684  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298690  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.298699  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298705  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298713  118459 command_runner.go:130] >       },
	I1008 14:43:46.298725  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298735  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298744  118459 command_runner.go:130] >     },
	I1008 14:43:46.298752  118459 command_runner.go:130] >     {
	I1008 14:43:46.298762  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.298784  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298800  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.298808  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298815  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298829  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.298843  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.298851  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298860  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.298864  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298867  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298871  118459 command_runner.go:130] >     },
	I1008 14:43:46.298882  118459 command_runner.go:130] >     {
	I1008 14:43:46.298891  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.298895  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298899  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.298903  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298907  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298914  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.298931  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.298937  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298941  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.298948  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298952  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298957  118459 command_runner.go:130] >       },
	I1008 14:43:46.298961  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298967  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298971  118459 command_runner.go:130] >     },
	I1008 14:43:46.298978  118459 command_runner.go:130] >     {
	I1008 14:43:46.298987  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.298996  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.299004  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.299025  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299035  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.299047  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.299060  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.299068  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299074  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.299081  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.299087  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.299095  118459 command_runner.go:130] >       },
	I1008 14:43:46.299100  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.299108  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.299113  118459 command_runner.go:130] >     }
	I1008 14:43:46.299117  118459 command_runner.go:130] >   ]
	I1008 14:43:46.299125  118459 command_runner.go:130] > }
	I1008 14:43:46.300090  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.300109  118459 crio.go:433] Images already preloaded, skipping extraction
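[editor's note] The JSON dump above is the output of `sudo crictl images --output json` that minikube inspects before deciding the preload tarball does not need extraction. A small illustrative sketch (assumed logic, not minikube's implementation) of parsing that output and checking for one of the required tags shown in the log:

    // images_check.go - hypothetical sketch: list crictl images and look for a tag.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // imageList mirrors the fields visible in the JSON above ("images", "id", "repoTags").
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.34.1" // one of the tags in the log above
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded:", want, "->", img.ID[:12])
    				return
    			}
    		}
    	}
    	fmt.Println("missing:", want, "- extraction would be required")
    }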
	I1008 14:43:46.300168  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.325949  118459 command_runner.go:130] > {
	I1008 14:43:46.325970  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.325974  118459 command_runner.go:130] >     {
	I1008 14:43:46.325985  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.325990  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.325996  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.325999  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326003  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326016  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.326031  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.326040  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326047  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.326055  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326063  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326068  118459 command_runner.go:130] >     },
	I1008 14:43:46.326072  118459 command_runner.go:130] >     {
	I1008 14:43:46.326083  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.326089  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326094  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.326100  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326104  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326125  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.326136  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.326142  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326147  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.326151  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326158  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326163  118459 command_runner.go:130] >     },
	I1008 14:43:46.326166  118459 command_runner.go:130] >     {
	I1008 14:43:46.326172  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.326178  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326183  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.326188  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326192  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326201  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.326208  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.326213  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326219  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.326223  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.326226  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326229  118459 command_runner.go:130] >     },
	I1008 14:43:46.326232  118459 command_runner.go:130] >     {
	I1008 14:43:46.326238  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.326245  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326249  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.326252  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326256  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326262  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.326269  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.326275  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326279  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.326284  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326287  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326293  118459 command_runner.go:130] >       },
	I1008 14:43:46.326307  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326314  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326317  118459 command_runner.go:130] >     },
	I1008 14:43:46.326320  118459 command_runner.go:130] >     {
	I1008 14:43:46.326326  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.326331  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326335  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.326338  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326342  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326349  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.326358  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.326361  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326366  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.326369  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326373  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326378  118459 command_runner.go:130] >       },
	I1008 14:43:46.326382  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326385  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326392  118459 command_runner.go:130] >     },
	I1008 14:43:46.326395  118459 command_runner.go:130] >     {
	I1008 14:43:46.326401  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.326407  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326412  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.326415  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326419  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326429  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.326436  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.326453  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326460  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.326468  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326472  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326475  118459 command_runner.go:130] >       },
	I1008 14:43:46.326479  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326490  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326496  118459 command_runner.go:130] >     },
	I1008 14:43:46.326499  118459 command_runner.go:130] >     {
	I1008 14:43:46.326505  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.326511  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326515  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.326518  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326522  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326531  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.326538  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.326543  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326548  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.326551  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326555  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326558  118459 command_runner.go:130] >     },
	I1008 14:43:46.326561  118459 command_runner.go:130] >     {
	I1008 14:43:46.326567  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.326571  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326575  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.326578  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326582  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326588  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.326611  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.326617  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326621  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.326625  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326631  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326634  118459 command_runner.go:130] >       },
	I1008 14:43:46.326638  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326643  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326646  118459 command_runner.go:130] >     },
	I1008 14:43:46.326650  118459 command_runner.go:130] >     {
	I1008 14:43:46.326655  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.326666  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326673  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.326676  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326680  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326688  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.326698  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.326705  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326709  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.326714  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326718  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.326722  118459 command_runner.go:130] >       },
	I1008 14:43:46.326726  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326732  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.326735  118459 command_runner.go:130] >     }
	I1008 14:43:46.326738  118459 command_runner.go:130] >   ]
	I1008 14:43:46.326740  118459 command_runner.go:130] > }
	I1008 14:43:46.326842  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.326863  118459 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:43:46.326869  118459 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:43:46.326972  118459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
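[editor's note] The block above is the kubelet systemd drop-in text that minikube renders for this node (kubelet binary path, --hostname-override, --node-ip) before writing it to the machine. A hedged sketch of how such a drop-in could be rendered from those values with text/template; the struct and field names here are invented for illustration, only the flag values come from the log:

    // kubelet_unit.go - hypothetical rendering of a drop-in like the one logged above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// values taken from the log lines above
    	data := struct {
    		KubeletPath, NodeName, NodeIP string
    	}{
    		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
    		NodeName:    "functional-367186",
    		NodeIP:      "192.168.49.2",
    	}
    	tmpl := template.Must(template.New("kubelet").Parse(unit))
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }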
	I1008 14:43:46.327030  118459 ssh_runner.go:195] Run: crio config
	I1008 14:43:46.368296  118459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 14:43:46.368332  118459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 14:43:46.368340  118459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 14:43:46.368344  118459 command_runner.go:130] > #
	I1008 14:43:46.368350  118459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 14:43:46.368356  118459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 14:43:46.368362  118459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 14:43:46.368376  118459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 14:43:46.368381  118459 command_runner.go:130] > # reload'.
	I1008 14:43:46.368392  118459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 14:43:46.368405  118459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 14:43:46.368418  118459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 14:43:46.368433  118459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 14:43:46.368458  118459 command_runner.go:130] > [crio]
	I1008 14:43:46.368472  118459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 14:43:46.368480  118459 command_runner.go:130] > # containers images, in this directory.
	I1008 14:43:46.368492  118459 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1008 14:43:46.368502  118459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 14:43:46.368514  118459 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1008 14:43:46.368525  118459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 14:43:46.368536  118459 command_runner.go:130] > # imagestore = ""
	I1008 14:43:46.368546  118459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 14:43:46.368559  118459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 14:43:46.368566  118459 command_runner.go:130] > # storage_driver = "overlay"
	I1008 14:43:46.368580  118459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 14:43:46.368587  118459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 14:43:46.368594  118459 command_runner.go:130] > # storage_option = [
	I1008 14:43:46.368599  118459 command_runner.go:130] > # ]
	I1008 14:43:46.368608  118459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 14:43:46.368621  118459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 14:43:46.368631  118459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 14:43:46.368640  118459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 14:43:46.368651  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 14:43:46.368666  118459 command_runner.go:130] > # always happen on a node reboot
	I1008 14:43:46.368678  118459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 14:43:46.368702  118459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 14:43:46.368714  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 14:43:46.368726  118459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 14:43:46.368736  118459 command_runner.go:130] > # version_file_persist = ""
	I1008 14:43:46.368751  118459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 14:43:46.368767  118459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 14:43:46.368775  118459 command_runner.go:130] > # internal_wipe = true
	I1008 14:43:46.368791  118459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 14:43:46.368802  118459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 14:43:46.368820  118459 command_runner.go:130] > # internal_repair = true
	I1008 14:43:46.368834  118459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 14:43:46.368847  118459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 14:43:46.368859  118459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 14:43:46.368869  118459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 14:43:46.368882  118459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 14:43:46.368891  118459 command_runner.go:130] > [crio.api]
	I1008 14:43:46.368900  118459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 14:43:46.368910  118459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 14:43:46.368921  118459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 14:43:46.368931  118459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 14:43:46.368942  118459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 14:43:46.368954  118459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 14:43:46.368963  118459 command_runner.go:130] > # stream_port = "0"
	I1008 14:43:46.368971  118459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 14:43:46.368981  118459 command_runner.go:130] > # stream_enable_tls = false
	I1008 14:43:46.368992  118459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 14:43:46.369002  118459 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 14:43:46.369012  118459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 14:43:46.369025  118459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369033  118459 command_runner.go:130] > # stream_tls_cert = ""
	I1008 14:43:46.369043  118459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 14:43:46.369055  118459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369075  118459 command_runner.go:130] > # stream_tls_key = ""
	I1008 14:43:46.369092  118459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 14:43:46.369106  118459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 14:43:46.369121  118459 command_runner.go:130] > # automatically pick up the changes.
	I1008 14:43:46.369130  118459 command_runner.go:130] > # stream_tls_ca = ""
	I1008 14:43:46.369153  118459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369163  118459 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1008 14:43:46.369176  118459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369186  118459 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1008 14:43:46.369197  118459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 14:43:46.369209  118459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 14:43:46.369219  118459 command_runner.go:130] > [crio.runtime]
	I1008 14:43:46.369229  118459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 14:43:46.369240  118459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 14:43:46.369246  118459 command_runner.go:130] > # "nofile=1024:2048"
	I1008 14:43:46.369260  118459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 14:43:46.369269  118459 command_runner.go:130] > # default_ulimits = [
	I1008 14:43:46.369275  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369288  118459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 14:43:46.369296  118459 command_runner.go:130] > # no_pivot = false
	I1008 14:43:46.369305  118459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 14:43:46.369317  118459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 14:43:46.369327  118459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 14:43:46.369338  118459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 14:43:46.369348  118459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 14:43:46.369359  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369368  118459 command_runner.go:130] > # conmon = ""
	I1008 14:43:46.369375  118459 command_runner.go:130] > # Cgroup setting for conmon
	I1008 14:43:46.369386  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 14:43:46.369393  118459 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 14:43:46.369402  118459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 14:43:46.369410  118459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 14:43:46.369421  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369430  118459 command_runner.go:130] > # conmon_env = [
	I1008 14:43:46.369435  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369456  118459 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 14:43:46.369465  118459 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 14:43:46.369475  118459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 14:43:46.369484  118459 command_runner.go:130] > # default_env = [
	I1008 14:43:46.369489  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369498  118459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 14:43:46.369516  118459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1008 14:43:46.369528  118459 command_runner.go:130] > # selinux = false
	I1008 14:43:46.369539  118459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 14:43:46.369555  118459 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1008 14:43:46.369564  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369570  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.369582  118459 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1008 14:43:46.369602  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369609  118459 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1008 14:43:46.369619  118459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 14:43:46.369631  118459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 14:43:46.369644  118459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 14:43:46.369653  118459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 14:43:46.369661  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369672  118459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 14:43:46.369680  118459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 14:43:46.369690  118459 command_runner.go:130] > # the cgroup blockio controller.
	I1008 14:43:46.369697  118459 command_runner.go:130] > # blockio_config_file = ""
	I1008 14:43:46.369709  118459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 14:43:46.369718  118459 command_runner.go:130] > # blockio parameters.
	I1008 14:43:46.369724  118459 command_runner.go:130] > # blockio_reload = false
	I1008 14:43:46.369735  118459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 14:43:46.369744  118459 command_runner.go:130] > # irqbalance daemon.
	I1008 14:43:46.369857  118459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 14:43:46.369873  118459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 14:43:46.369884  118459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 14:43:46.369898  118459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 14:43:46.369909  118459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 14:43:46.369924  118459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 14:43:46.369934  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369943  118459 command_runner.go:130] > # rdt_config_file = ""
	I1008 14:43:46.369950  118459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 14:43:46.369959  118459 command_runner.go:130] > # cgroup_manager = "systemd"
	I1008 14:43:46.369968  118459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 14:43:46.369979  118459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 14:43:46.369989  118459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 14:43:46.370002  118459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 14:43:46.370011  118459 command_runner.go:130] > # will be added.
	I1008 14:43:46.370027  118459 command_runner.go:130] > # default_capabilities = [
	I1008 14:43:46.370036  118459 command_runner.go:130] > # 	"CHOWN",
	I1008 14:43:46.370044  118459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 14:43:46.370051  118459 command_runner.go:130] > # 	"FSETID",
	I1008 14:43:46.370054  118459 command_runner.go:130] > # 	"FOWNER",
	I1008 14:43:46.370062  118459 command_runner.go:130] > # 	"SETGID",
	I1008 14:43:46.370083  118459 command_runner.go:130] > # 	"SETUID",
	I1008 14:43:46.370093  118459 command_runner.go:130] > # 	"SETPCAP",
	I1008 14:43:46.370099  118459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 14:43:46.370108  118459 command_runner.go:130] > # 	"KILL",
	I1008 14:43:46.370113  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370127  118459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 14:43:46.370140  118459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 14:43:46.370152  118459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 14:43:46.370164  118459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 14:43:46.370173  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370183  118459 command_runner.go:130] > default_sysctls = [
	I1008 14:43:46.370193  118459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 14:43:46.370198  118459 command_runner.go:130] > ]
	I1008 14:43:46.370209  118459 command_runner.go:130] > # List of devices on the host that a
	I1008 14:43:46.370249  118459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 14:43:46.370259  118459 command_runner.go:130] > # allowed_devices = [
	I1008 14:43:46.370266  118459 command_runner.go:130] > # 	"/dev/fuse",
	I1008 14:43:46.370270  118459 command_runner.go:130] > # 	"/dev/net/tun",
	I1008 14:43:46.370277  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370285  118459 command_runner.go:130] > # List of additional devices. specified as
	I1008 14:43:46.370300  118459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 14:43:46.370312  118459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 14:43:46.370324  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370333  118459 command_runner.go:130] > # additional_devices = [
	I1008 14:43:46.370341  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370351  118459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 14:43:46.370360  118459 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 14:43:46.370366  118459 command_runner.go:130] > # 	"/etc/cdi",
	I1008 14:43:46.370370  118459 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 14:43:46.370378  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370387  118459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 14:43:46.370400  118459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 14:43:46.370411  118459 command_runner.go:130] > # Defaults to false.
	I1008 14:43:46.370422  118459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 14:43:46.370434  118459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 14:43:46.370462  118459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 14:43:46.370470  118459 command_runner.go:130] > # hooks_dir = [
	I1008 14:43:46.370481  118459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 14:43:46.370491  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370503  118459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 14:43:46.370515  118459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 14:43:46.370526  118459 command_runner.go:130] > # its default mounts from the following two files:
	I1008 14:43:46.370532  118459 command_runner.go:130] > #
	I1008 14:43:46.370538  118459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 14:43:46.370550  118459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 14:43:46.370562  118459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 14:43:46.370571  118459 command_runner.go:130] > #
	I1008 14:43:46.370580  118459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 14:43:46.370593  118459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 14:43:46.370605  118459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 14:43:46.370615  118459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 14:43:46.370623  118459 command_runner.go:130] > #
	I1008 14:43:46.370629  118459 command_runner.go:130] > # default_mounts_file = ""
	I1008 14:43:46.370637  118459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 14:43:46.370647  118459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 14:43:46.370657  118459 command_runner.go:130] > # pids_limit = -1
	I1008 14:43:46.370667  118459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1008 14:43:46.370679  118459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 14:43:46.370693  118459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 14:43:46.370708  118459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 14:43:46.370717  118459 command_runner.go:130] > # log_size_max = -1
	I1008 14:43:46.370728  118459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 14:43:46.370735  118459 command_runner.go:130] > # log_to_journald = false
	I1008 14:43:46.370743  118459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 14:43:46.370755  118459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 14:43:46.370763  118459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 14:43:46.370774  118459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 14:43:46.370785  118459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 14:43:46.370794  118459 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 14:43:46.370804  118459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 14:43:46.370819  118459 command_runner.go:130] > # read_only = false
	I1008 14:43:46.370828  118459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 14:43:46.370841  118459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 14:43:46.370850  118459 command_runner.go:130] > # live configuration reload.
	I1008 14:43:46.370856  118459 command_runner.go:130] > # log_level = "info"
	I1008 14:43:46.370868  118459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 14:43:46.370884  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.370893  118459 command_runner.go:130] > # log_filter = ""
	I1008 14:43:46.370905  118459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370917  118459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 14:43:46.370923  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370934  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.370943  118459 command_runner.go:130] > # uid_mappings = ""
	I1008 14:43:46.370955  118459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370967  118459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 14:43:46.370979  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370994  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371003  118459 command_runner.go:130] > # gid_mappings = ""
	I1008 14:43:46.371012  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 14:43:46.371023  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371037  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371055  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371064  118459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 14:43:46.371076  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 14:43:46.371087  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371100  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371112  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371122  118459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 14:43:46.371134  118459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 14:43:46.371146  118459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 14:43:46.371158  118459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 14:43:46.371168  118459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 14:43:46.371179  118459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 14:43:46.371188  118459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 14:43:46.371193  118459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 14:43:46.371204  118459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 14:43:46.371214  118459 command_runner.go:130] > # drop_infra_ctr = true
	I1008 14:43:46.371224  118459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 14:43:46.371235  118459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 14:43:46.371249  118459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 14:43:46.371258  118459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 14:43:46.371276  118459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 14:43:46.371285  118459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 14:43:46.371294  118459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 14:43:46.371306  118459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 14:43:46.371316  118459 command_runner.go:130] > # shared_cpuset = ""
	I1008 14:43:46.371326  118459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 14:43:46.371337  118459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 14:43:46.371346  118459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 14:43:46.371358  118459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 14:43:46.371366  118459 command_runner.go:130] > # pinns_path = ""
	I1008 14:43:46.371374  118459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 14:43:46.371385  118459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 14:43:46.371395  118459 command_runner.go:130] > # enable_criu_support = true
	I1008 14:43:46.371405  118459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 14:43:46.371417  118459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 14:43:46.371422  118459 command_runner.go:130] > # enable_pod_events = false
	I1008 14:43:46.371434  118459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 14:43:46.371453  118459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 14:43:46.371465  118459 command_runner.go:130] > # default_runtime = "crun"
	I1008 14:43:46.371473  118459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 14:43:46.371484  118459 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1008 14:43:46.371501  118459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 14:43:46.371511  118459 command_runner.go:130] > # creation as a file is not desired either.
	I1008 14:43:46.371526  118459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 14:43:46.371537  118459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 14:43:46.371545  118459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 14:43:46.371552  118459 command_runner.go:130] > # ]
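As a sketch of the example the comment itself gives, a drop-in that rejects /etc/hostname when it is absent from the host would be:

    [crio.runtime]
    absent_mount_sources_to_reject = [
      "/etc/hostname",
    ]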
	I1008 14:43:46.371559  118459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 14:43:46.371568  118459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 14:43:46.371574  118459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 14:43:46.371579  118459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 14:43:46.371584  118459 command_runner.go:130] > #
	I1008 14:43:46.371589  118459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 14:43:46.371595  118459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 14:43:46.371599  118459 command_runner.go:130] > # runtime_type = "oci"
	I1008 14:43:46.371606  118459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 14:43:46.371610  118459 command_runner.go:130] > # inherit_default_runtime = false
	I1008 14:43:46.371621  118459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 14:43:46.371628  118459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 14:43:46.371633  118459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 14:43:46.371639  118459 command_runner.go:130] > # monitor_env = []
	I1008 14:43:46.371643  118459 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 14:43:46.371649  118459 command_runner.go:130] > # allowed_annotations = []
	I1008 14:43:46.371654  118459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 14:43:46.371660  118459 command_runner.go:130] > # no_sync_log = false
	I1008 14:43:46.371664  118459 command_runner.go:130] > # default_annotations = {}
	I1008 14:43:46.371672  118459 command_runner.go:130] > # stream_websockets = false
	I1008 14:43:46.371676  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.371698  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.371705  118459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 14:43:46.371711  118459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 14:43:46.371719  118459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 14:43:46.371727  118459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 14:43:46.371731  118459 command_runner.go:130] > #   in $PATH.
	I1008 14:43:46.371736  118459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 14:43:46.371743  118459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 14:43:46.371748  118459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 14:43:46.371753  118459 command_runner.go:130] > #   state.
	I1008 14:43:46.371759  118459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 14:43:46.371767  118459 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1008 14:43:46.371772  118459 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1008 14:43:46.371780  118459 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1008 14:43:46.371785  118459 command_runner.go:130] > #   the values from the default runtime on load time.
	I1008 14:43:46.371793  118459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 14:43:46.371801  118459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 14:43:46.371819  118459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 14:43:46.371827  118459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 14:43:46.371832  118459 command_runner.go:130] > #   The currently recognized values are:
	I1008 14:43:46.371840  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 14:43:46.371846  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 14:43:46.371854  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 14:43:46.371859  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 14:43:46.371869  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 14:43:46.371877  118459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 14:43:46.371885  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 14:43:46.371894  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 14:43:46.371900  118459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 14:43:46.371908  118459 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1008 14:43:46.371917  118459 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1008 14:43:46.371926  118459 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1008 14:43:46.371937  118459 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1008 14:43:46.371943  118459 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1008 14:43:46.371951  118459 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1008 14:43:46.371958  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1008 14:43:46.371966  118459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 14:43:46.371973  118459 command_runner.go:130] > #   deprecated option "conmon".
	I1008 14:43:46.371980  118459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 14:43:46.371987  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 14:43:46.371993  118459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 14:43:46.372000  118459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 14:43:46.372006  118459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 14:43:46.372013  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 14:43:46.372019  118459 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1008 14:43:46.372025  118459 command_runner.go:130] > #   conmon-rs by using:
	I1008 14:43:46.372032  118459 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1008 14:43:46.372041  118459 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1008 14:43:46.372050  118459 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1008 14:43:46.372060  118459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 14:43:46.372067  118459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 14:43:46.372073  118459 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1008 14:43:46.372083  118459 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1008 14:43:46.372090  118459 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1008 14:43:46.372097  118459 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1008 14:43:46.372107  118459 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1008 14:43:46.372116  118459 command_runner.go:130] > #   when a machine crash happens.
	I1008 14:43:46.372125  118459 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1008 14:43:46.372132  118459 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1008 14:43:46.372139  118459 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1008 14:43:46.372145  118459 command_runner.go:130] > #   seccomp profile for the runtime.
	I1008 14:43:46.372151  118459 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1008 14:43:46.372160  118459 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1008 14:43:46.372165  118459 command_runner.go:130] > #
	I1008 14:43:46.372170  118459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 14:43:46.372175  118459 command_runner.go:130] > #
	I1008 14:43:46.372181  118459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 14:43:46.372187  118459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 14:43:46.372192  118459 command_runner.go:130] > #
	I1008 14:43:46.372198  118459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 14:43:46.372205  118459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 14:43:46.372208  118459 command_runner.go:130] > #
	I1008 14:43:46.372214  118459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 14:43:46.372219  118459 command_runner.go:130] > # feature.
	I1008 14:43:46.372222  118459 command_runner.go:130] > #
	I1008 14:43:46.372228  118459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 14:43:46.372235  118459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 14:43:46.372242  118459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 14:43:46.372251  118459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 14:43:46.372259  118459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 14:43:46.372261  118459 command_runner.go:130] > #
	I1008 14:43:46.372267  118459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 14:43:46.372275  118459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 14:43:46.372281  118459 command_runner.go:130] > #
	I1008 14:43:46.372286  118459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 14:43:46.372294  118459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 14:43:46.372297  118459 command_runner.go:130] > #
	I1008 14:43:46.372302  118459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 14:43:46.372310  118459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 14:43:46.372314  118459 command_runner.go:130] > # limitation.
	I1008 14:43:46.372320  118459 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1008 14:43:46.372325  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1008 14:43:46.372330  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372334  118459 command_runner.go:130] > runtime_root = "/run/crun"
	I1008 14:43:46.372343  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372349  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372353  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372358  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372363  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372367  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372374  118459 command_runner.go:130] > allowed_annotations = [
	I1008 14:43:46.372380  118459 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1008 14:43:46.372384  118459 command_runner.go:130] > ]
	I1008 14:43:46.372391  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372395  118459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 14:43:46.372402  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1008 14:43:46.372406  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372411  118459 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 14:43:46.372415  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372422  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372425  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372432  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372436  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372453  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372461  118459 command_runner.go:130] > privileged_without_host_devices = false
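Building on the crun entry above and the seccomp notifier description, a hypothetical extra handler that is allowed to process the notifier annotation might be sketched as follows (the handler name is illustrative; paths reuse the crun defaults shown above):

    [crio.runtime.runtimes.crun-notify]
    runtime_path = "/usr/libexec/crio/crun"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = [
      "io.kubernetes.cri-o.seccompNotifierAction",
    ]

A pod using this handler would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, as described above.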
	I1008 14:43:46.372473  118459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 14:43:46.372482  118459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 14:43:46.372491  118459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 14:43:46.372498  118459 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1008 14:43:46.372509  118459 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1008 14:43:46.372520  118459 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1008 14:43:46.372530  118459 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1008 14:43:46.372537  118459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 14:43:46.372545  118459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 14:43:46.372555  118459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 14:43:46.372562  118459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 14:43:46.372569  118459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 14:43:46.372574  118459 command_runner.go:130] > # Example:
	I1008 14:43:46.372578  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 14:43:46.372585  118459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 14:43:46.372591  118459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 14:43:46.372602  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 14:43:46.372608  118459 command_runner.go:130] > # cpuset = "0-1"
	I1008 14:43:46.372612  118459 command_runner.go:130] > # cpushares = "5"
	I1008 14:43:46.372617  118459 command_runner.go:130] > # cpuquota = "1000"
	I1008 14:43:46.372621  118459 command_runner.go:130] > # cpuperiod = "100000"
	I1008 14:43:46.372626  118459 command_runner.go:130] > # cpulimit = "35"
	I1008 14:43:46.372630  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.372634  118459 command_runner.go:130] > # The workload name is workload-type.
	I1008 14:43:46.372643  118459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 14:43:46.372650  118459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 14:43:46.372655  118459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 14:43:46.372665  118459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 14:43:46.372682  118459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1008 14:43:46.372689  118459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 14:43:46.372695  118459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 14:43:46.372701  118459 command_runner.go:130] > # Default value is set to true
	I1008 14:43:46.372706  118459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 14:43:46.372713  118459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 14:43:46.372717  118459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 14:43:46.372724  118459 command_runner.go:130] > # Default value is set to 'false'
	I1008 14:43:46.372728  118459 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 14:43:46.372735  118459 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1008 14:43:46.372743  118459 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1008 14:43:46.372748  118459 command_runner.go:130] > # timezone = ""
	I1008 14:43:46.372756  118459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 14:43:46.372761  118459 command_runner.go:130] > #
	I1008 14:43:46.372767  118459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 14:43:46.372775  118459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1008 14:43:46.372781  118459 command_runner.go:130] > [crio.image]
	I1008 14:43:46.372786  118459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 14:43:46.372792  118459 command_runner.go:130] > # default_transport = "docker://"
	I1008 14:43:46.372798  118459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 14:43:46.372822  118459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372828  118459 command_runner.go:130] > # global_auth_file = ""
	I1008 14:43:46.372833  118459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 14:43:46.372840  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372844  118459 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.372853  118459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 14:43:46.372861  118459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372871  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372877  118459 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 14:43:46.372883  118459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 14:43:46.372888  118459 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1008 14:43:46.372896  118459 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1008 14:43:46.372902  118459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 14:43:46.372908  118459 command_runner.go:130] > # pause_command = "/pause"
	I1008 14:43:46.372914  118459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 14:43:46.372922  118459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 14:43:46.372927  118459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 14:43:46.372935  118459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 14:43:46.372940  118459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 14:43:46.372948  118459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 14:43:46.372952  118459 command_runner.go:130] > # pinned_images = [
	I1008 14:43:46.372958  118459 command_runner.go:130] > # ]
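For example, pinning the pause image referenced earlier, plus (hypothetically) everything under a prefix via a trailing glob, could be written as:

    [crio.image]
    pinned_images = [
      "registry.k8s.io/pause:3.10.1",
      # hypothetical glob pattern; for glob matches the wildcard is only allowed at the end
      "registry.k8s.io/kube-*",
    ]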
	I1008 14:43:46.372963  118459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 14:43:46.372972  118459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 14:43:46.372978  118459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 14:43:46.372986  118459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 14:43:46.372991  118459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 14:43:46.372997  118459 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1008 14:43:46.373003  118459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 14:43:46.373012  118459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 14:43:46.373021  118459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 14:43:46.373029  118459 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1008 14:43:46.373034  118459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 14:43:46.373042  118459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
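To illustrate the namespaced lookup described above: with the drop-in sketched below, an image pulled for a pod in namespace kube-system would first consult /etc/crio/policies/kube-system.json and fall back to /etc/crio/policy.json if that file does not exist (both paths are the defaults shown above):

    [crio.image]
    signature_policy = "/etc/crio/policy.json"
    signature_policy_dir = "/etc/crio/policies"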
	I1008 14:43:46.373051  118459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 14:43:46.373058  118459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 14:43:46.373065  118459 command_runner.go:130] > # changing them here.
	I1008 14:43:46.373070  118459 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1008 14:43:46.373076  118459 command_runner.go:130] > # insecure_registries = [
	I1008 14:43:46.373079  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373087  118459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 14:43:46.373095  118459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 14:43:46.373104  118459 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 14:43:46.373112  118459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 14:43:46.373116  118459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 14:43:46.373124  118459 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1008 14:43:46.373130  118459 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1008 14:43:46.373134  118459 command_runner.go:130] > # auto_reload_registries = false
	I1008 14:43:46.373142  118459 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1008 14:43:46.373149  118459 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1008 14:43:46.373157  118459 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1008 14:43:46.373162  118459 command_runner.go:130] > # pull_progress_timeout = "0s"
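As a worked example of the interval rule above, a hypothetical setting of five minutes would cancel a stalled pull after 5m and report progress roughly every 30s (5m / 10), while "0s" disables both:

    [crio.image]
    pull_progress_timeout = "5m"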
	I1008 14:43:46.373168  118459 command_runner.go:130] > # The mode of short name resolution.
	I1008 14:43:46.373174  118459 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1008 14:43:46.373183  118459 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1008 14:43:46.373190  118459 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1008 14:43:46.373195  118459 command_runner.go:130] > # short_name_mode = "enforcing"
	I1008 14:43:46.373204  118459 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1008 14:43:46.373212  118459 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1008 14:43:46.373216  118459 command_runner.go:130] > # oci_artifact_mount_support = true
	I1008 14:43:46.373224  118459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 14:43:46.373228  118459 command_runner.go:130] > # CNI plugins.
	I1008 14:43:46.373234  118459 command_runner.go:130] > [crio.network]
	I1008 14:43:46.373239  118459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 14:43:46.373246  118459 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1008 14:43:46.373251  118459 command_runner.go:130] > # cni_default_network = ""
	I1008 14:43:46.373259  118459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 14:43:46.373266  118459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 14:43:46.373271  118459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 14:43:46.373277  118459 command_runner.go:130] > # plugin_dirs = [
	I1008 14:43:46.373280  118459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 14:43:46.373284  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373289  118459 command_runner.go:130] > # List of included pod metrics.
	I1008 14:43:46.373295  118459 command_runner.go:130] > # included_pod_metrics = [
	I1008 14:43:46.373297  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373304  118459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 14:43:46.373310  118459 command_runner.go:130] > [crio.metrics]
	I1008 14:43:46.373314  118459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 14:43:46.373320  118459 command_runner.go:130] > # enable_metrics = false
	I1008 14:43:46.373324  118459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 14:43:46.373331  118459 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 14:43:46.373337  118459 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 14:43:46.373347  118459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 14:43:46.373355  118459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 14:43:46.373359  118459 command_runner.go:130] > # metrics_collectors = [
	I1008 14:43:46.373364  118459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 14:43:46.373368  118459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 14:43:46.373371  118459 command_runner.go:130] > # 	"containers_oom_total",
	I1008 14:43:46.373374  118459 command_runner.go:130] > # 	"processes_defunct",
	I1008 14:43:46.373378  118459 command_runner.go:130] > # 	"operations_total",
	I1008 14:43:46.373381  118459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 14:43:46.373386  118459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 14:43:46.373389  118459 command_runner.go:130] > # 	"operations_errors_total",
	I1008 14:43:46.373393  118459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 14:43:46.373397  118459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 14:43:46.373400  118459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 14:43:46.373408  118459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 14:43:46.373411  118459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 14:43:46.373415  118459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 14:43:46.373422  118459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 14:43:46.373426  118459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 14:43:46.373430  118459 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1008 14:43:46.373436  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373450  118459 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1008 14:43:46.373460  118459 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1008 14:43:46.373468  118459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 14:43:46.373475  118459 command_runner.go:130] > # metrics_port = 9090
	I1008 14:43:46.373480  118459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 14:43:46.373486  118459 command_runner.go:130] > # metrics_socket = ""
	I1008 14:43:46.373490  118459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 14:43:46.373499  118459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 14:43:46.373508  118459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 14:43:46.373514  118459 command_runner.go:130] > # certificate on any modification event.
	I1008 14:43:46.373518  118459 command_runner.go:130] > # metrics_cert = ""
	I1008 14:43:46.373525  118459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 14:43:46.373530  118459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 14:43:46.373536  118459 command_runner.go:130] > # metrics_key = ""
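A minimal sketch enabling the metrics endpoint with a subset of the collectors listed above (host and port are the defaults shown above):

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
      "operations_total",
      "image_pulls_failure_total",
      "containers_oom_total",
    ]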
	I1008 14:43:46.373542  118459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 14:43:46.373548  118459 command_runner.go:130] > [crio.tracing]
	I1008 14:43:46.373554  118459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 14:43:46.373564  118459 command_runner.go:130] > # enable_tracing = false
	I1008 14:43:46.373571  118459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 14:43:46.373576  118459 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1008 14:43:46.373584  118459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 14:43:46.373591  118459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
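Likewise, a sketch that turns on OpenTelemetry export to the default collector address and samples every span (1000000 per million, as noted above):

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000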
	I1008 14:43:46.373598  118459 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 14:43:46.373604  118459 command_runner.go:130] > [crio.nri]
	I1008 14:43:46.373608  118459 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 14:43:46.373614  118459 command_runner.go:130] > # enable_nri = true
	I1008 14:43:46.373618  118459 command_runner.go:130] > # NRI socket to listen on.
	I1008 14:43:46.373624  118459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 14:43:46.373628  118459 command_runner.go:130] > # NRI plugin directory to use.
	I1008 14:43:46.373635  118459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 14:43:46.373640  118459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 14:43:46.373647  118459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 14:43:46.373653  118459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 14:43:46.373688  118459 command_runner.go:130] > # nri_disable_connections = false
	I1008 14:43:46.373696  118459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 14:43:46.373701  118459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 14:43:46.373705  118459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 14:43:46.373712  118459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 14:43:46.373717  118459 command_runner.go:130] > # NRI default validator configuration.
	I1008 14:43:46.373725  118459 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1008 14:43:46.373733  118459 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1008 14:43:46.373737  118459 command_runner.go:130] > # can be restricted/rejected:
	I1008 14:43:46.373743  118459 command_runner.go:130] > # - OCI hook injection
	I1008 14:43:46.373748  118459 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1008 14:43:46.373755  118459 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1008 14:43:46.373760  118459 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1008 14:43:46.373766  118459 command_runner.go:130] > # - adjustment of linux namespaces
	I1008 14:43:46.373772  118459 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1008 14:43:46.373780  118459 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1008 14:43:46.373788  118459 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1008 14:43:46.373791  118459 command_runner.go:130] > #
	I1008 14:43:46.373795  118459 command_runner.go:130] > # [crio.nri.default_validator]
	I1008 14:43:46.373802  118459 command_runner.go:130] > # nri_enable_default_validator = false
	I1008 14:43:46.373811  118459 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1008 14:43:46.373819  118459 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1008 14:43:46.373827  118459 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1008 14:43:46.373832  118459 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1008 14:43:46.373839  118459 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1008 14:43:46.373843  118459 command_runner.go:130] > # nri_validator_required_plugins = [
	I1008 14:43:46.373848  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373853  118459 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
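For illustration, enabling the built-in NRI validator so that OCI hook injection and namespace adjustments are rejected could be sketched as follows (the required plugin name is hypothetical):

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_reject_namespace_adjustment = true
    nri_validator_required_plugins = [
      # hypothetical plugin name
      "my-resource-plugin",
    ]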
	I1008 14:43:46.373861  118459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 14:43:46.373865  118459 command_runner.go:130] > [crio.stats]
	I1008 14:43:46.373873  118459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 14:43:46.373880  118459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 14:43:46.373887  118459 command_runner.go:130] > # stats_collection_period = 0
	I1008 14:43:46.373892  118459 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1008 14:43:46.373900  118459 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1008 14:43:46.373907  118459 command_runner.go:130] > # collection_period = 0
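Finally, a small sketch switching stats from on-demand to periodic collection, e.g. every 10 seconds:

    [crio.stats]
    stats_collection_period = 10
    collection_period = 10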
	I1008 14:43:46.373928  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353034685Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1008 14:43:46.373938  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353062648Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1008 14:43:46.373948  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.35308236Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1008 14:43:46.373956  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353100078Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1008 14:43:46.373967  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353161884Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:46.373976  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353351718Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1008 14:43:46.373988  118459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 14:43:46.374064  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:46.374077  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:46.374093  118459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:43:46.374116  118459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:43:46.374237  118459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:43:46.374300  118459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:43:46.382363  118459 command_runner.go:130] > kubeadm
	I1008 14:43:46.382384  118459 command_runner.go:130] > kubectl
	I1008 14:43:46.382389  118459 command_runner.go:130] > kubelet
	I1008 14:43:46.382411  118459 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:43:46.382482  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:43:46.390162  118459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:43:46.403097  118459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:43:46.415613  118459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1008 14:43:46.428192  118459 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:43:46.432007  118459 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1008 14:43:46.432080  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.522533  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:46.535801  118459 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:43:46.535827  118459 certs.go:195] generating shared ca certs ...
	I1008 14:43:46.535849  118459 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:46.536002  118459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:43:46.536048  118459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:43:46.536069  118459 certs.go:257] generating profile certs ...
	I1008 14:43:46.536190  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:43:46.536242  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:43:46.536277  118459 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:43:46.536291  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:43:46.536306  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:43:46.536318  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:43:46.536330  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:43:46.536342  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:43:46.536377  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:43:46.536393  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:43:46.536405  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:43:46.536476  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:43:46.536513  118459 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:43:46.536523  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:43:46.536550  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:43:46.536574  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:43:46.536595  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:43:46.536635  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:46.536660  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.536675  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.536688  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.537241  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:43:46.555642  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:43:46.572819  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:43:46.590661  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:43:46.607931  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:43:46.625383  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:43:46.642336  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:43:46.659419  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:43:46.676486  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:43:46.693083  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:43:46.710326  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:43:46.727941  118459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:43:46.740780  118459 ssh_runner.go:195] Run: openssl version
	I1008 14:43:46.747268  118459 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1008 14:43:46.747351  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:43:46.756220  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760077  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760121  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760189  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.794493  118459 command_runner.go:130] > 3ec20f2e
	I1008 14:43:46.794726  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:43:46.803126  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:43:46.811855  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815648  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815718  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815789  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.849403  118459 command_runner.go:130] > b5213941
	I1008 14:43:46.849676  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:43:46.857958  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:43:46.866212  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869736  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869766  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869798  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.904128  118459 command_runner.go:130] > 51391683
	I1008 14:43:46.904402  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:43:46.913326  118459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917356  118459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917385  118459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 14:43:46.917396  118459 command_runner.go:130] > Device: 8,1	Inode: 591874      Links: 1
	I1008 14:43:46.917405  118459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.917413  118459 command_runner.go:130] > Access: 2025-10-08 14:39:39.676864991 +0000
	I1008 14:43:46.917418  118459 command_runner.go:130] > Modify: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917426  118459 command_runner.go:130] > Change: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917431  118459 command_runner.go:130] >  Birth: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917505  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:43:46.951955  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.952157  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:43:46.986574  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.986789  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:43:47.021180  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.021253  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:43:47.054995  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.055238  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:43:47.088666  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.089049  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:43:47.123893  118459 command_runner.go:130] > Certificate will not expire
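Each control-plane certificate is then checked with openssl's -checkend flag, which succeeds (printing "Certificate will not expire") only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. A short sketch of the same check over two of the certificates named above:

    # Sketch: -checkend 86400 exits 0 if the cert remains valid for the next 24h.
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      if openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "$crt: valid for at least another 24h"
      else
        echo "$crt: expires within 24h"
      fi
    done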
	I1008 14:43:47.124156  118459 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:47.124254  118459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:43:47.124313  118459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:43:47.152244  118459 cri.go:89] found id: ""
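As part of StartCluster, minikube first asks the CRI runtime which kube-system containers exist; the empty result here (found id: "") means none are up yet. The probe is just the crictl call from the log:

    # Sketch of the probe above: list IDs of all kube-system pod containers (any state).
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -z "$ids" ] && echo "no kube-system containers found"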
	I1008 14:43:47.152307  118459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:43:47.160274  118459 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1008 14:43:47.160294  118459 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1008 14:43:47.160299  118459 command_runner.go:130] > /var/lib/minikube/etcd:
	I1008 14:43:47.160318  118459 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:43:47.160325  118459 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:43:47.160370  118459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:43:47.167663  118459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:43:47.167758  118459 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.167803  118459 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "functional-367186" cluster setting kubeconfig missing "functional-367186" context setting]
	I1008 14:43:47.168217  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.169051  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.169269  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.170001  118459 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:43:47.170034  118459 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:43:47.170046  118459 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:43:47.170052  118459 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:43:47.170058  118459 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:43:47.170055  118459 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:43:47.170535  118459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:43:47.177804  118459 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 14:43:47.177829  118459 kubeadm.go:601] duration metric: took 17.498385ms to restartPrimaryControlPlane
	I1008 14:43:47.177836  118459 kubeadm.go:402] duration metric: took 53.689897ms to StartCluster
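Whether the control plane actually needs reconfiguring is decided by diffing the kubeadm config already on disk against the freshly generated one; since the files match here, restartPrimaryControlPlane finishes in roughly 17ms without invoking kubeadm. Roughly, assuming the paths from the log:

    # Sketch of the reconfiguration check: identical files mean the running
    # cluster keeps its existing kubeadm configuration.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
      echo "running cluster does not require reconfiguration"
    else
      echo "kubeadm.yaml changed; the control plane would be reconfigured"
    fi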
	I1008 14:43:47.177851  118459 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.177960  118459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.178692  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.178964  118459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:43:47.179000  118459 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:43:47.179182  118459 addons.go:69] Setting storage-provisioner=true in profile "functional-367186"
	I1008 14:43:47.179161  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:47.179199  118459 addons.go:238] Setting addon storage-provisioner=true in "functional-367186"
	I1008 14:43:47.179280  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.179202  118459 addons.go:69] Setting default-storageclass=true in profile "functional-367186"
	I1008 14:43:47.179355  118459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-367186"
	I1008 14:43:47.179643  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.179723  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.181696  118459 out.go:179] * Verifying Kubernetes components...
	I1008 14:43:47.182986  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:47.197887  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.198131  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.198516  118459 addons.go:238] Setting addon default-storageclass=true in "functional-367186"
	I1008 14:43:47.198560  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.198956  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.199610  118459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:43:47.201208  118459 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.201228  118459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:43:47.201280  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.224257  118459 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.224285  118459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:43:47.224346  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.226258  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.244099  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.285014  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:47.298345  118459 node_ready.go:35] waiting up to 6m0s for node "functional-367186" to be "Ready" ...
	I1008 14:43:47.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.298934  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:47.336898  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.352323  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.393808  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.393854  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.393886  118459 retry.go:31] will retry after 231.755958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407397  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.407475  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407496  118459 retry.go:31] will retry after 329.539024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
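The storage-provisioner and storageclass manifests are applied with the kubelet-bundled kubectl against the apiserver on localhost:8441; because the apiserver is still coming back up, every apply fails with "connection refused" and is retried with a growing delay. A minimal sketch of an equivalent retry loop (not minikube's actual retry.go, whose backoff values differ):

    # Sketch: retry `kubectl apply` with a growing delay until the apiserver answers.
    manifest=/etc/kubernetes/addons/storage-provisioner.yaml
    delay=0.25
    for attempt in 1 2 3 4 5 6 7 8 9 10; do
      if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
           /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f "$manifest"; then
        echo "applied on attempt $attempt"; break
      fi
      echo "apply failed (attempt $attempt); retrying in ${delay}s"
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # roughly doubling; real intervals differ
    done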
	I1008 14:43:47.626786  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.679746  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.679800  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.679850  118459 retry.go:31] will retry after 393.16896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.738034  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.790656  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.792936  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.792970  118459 retry.go:31] will retry after 318.025551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.799129  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.799197  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.073934  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.111484  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.127850  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.127921  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.127943  118459 retry.go:31] will retry after 836.309595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.162277  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.164855  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.164886  118459 retry.go:31] will retry after 780.910281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.299211  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.299650  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.799557  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.799964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.946262  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.964996  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.998239  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.000519  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.000554  118459 retry.go:31] will retry after 896.283262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.018974  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.019036  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.019061  118459 retry.go:31] will retry after 1.078166751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.299460  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.299536  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.299868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:49.299950  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
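In parallel, the test waits up to 6 minutes for the node to report Ready, polling GET /api/v1/nodes/functional-367186 roughly every 500ms; while the apiserver is down each poll ends in the connection-refused warning above. A rough kubectl-based equivalent of that wait, assuming the kubeconfig already points at the cluster:

    # Sketch: poll the node's Ready condition for up to 6 minutes (360s).
    node=functional-367186
    deadline=$((SECONDS + 360))
    until kubectl get node "$node" \
            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
          | grep -q True; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for node $node to become Ready"; break
      fi
      sleep 0.5
    done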
	I1008 14:43:49.799616  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.799720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.800392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:49.897595  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:49.950387  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.950427  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.950463  118459 retry.go:31] will retry after 1.484279714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.097767  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:50.149377  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:50.149421  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.149465  118459 retry.go:31] will retry after 1.600335715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.298625  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:50.798695  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.798808  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.799174  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.298904  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.435639  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:51.489347  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.491876  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.491909  118459 retry.go:31] will retry after 2.200481753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.750291  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:51.799001  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.799398  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:51.799489  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:51.803486  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.803590  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.803616  118459 retry.go:31] will retry after 2.262800355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:52.299098  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.299177  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.299542  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:52.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.799399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.799764  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.298621  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.299048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.692777  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:53.745144  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:53.745204  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.745229  118459 retry.go:31] will retry after 3.527117876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.799392  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.799480  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.799857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:53.799918  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:54.067271  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:54.118417  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:54.118478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.118503  118459 retry.go:31] will retry after 3.862999365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.298755  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.298838  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.299219  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:54.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.799074  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.298863  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.298942  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.299253  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.798989  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.799066  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.799421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:56.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:56.299793  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:56.799548  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.799947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.272978  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:57.298541  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.298620  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.298918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.321958  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:57.324558  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.324587  118459 retry.go:31] will retry after 4.383767223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.799184  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.799301  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.799689  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.982062  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:58.032702  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:58.035195  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.035237  118459 retry.go:31] will retry after 5.903970239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:58.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:58.799473  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:59.298999  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.299078  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.299479  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:59.799062  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.799145  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.299550  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.799200  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.799275  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.799625  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:00.799685  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:01.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.299385  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.299774  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:01.709356  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:01.759088  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:01.761882  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.761921  118459 retry.go:31] will retry after 6.257319935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
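
The "apply failed, will retry" / "will retry after …" pairs above are a client-side backoff loop around the addon apply: kubectl cannot reach the apiserver behind localhost:8441, so its validation step (which fetches /openapi/v2) fails with connection refused and the whole command is queued for another attempt after a longer delay. As a rough illustration of that shape only (this is not minikube's actual retry.go; the loop below is a hedged sketch), such a retry wrapper in Go could look like:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os"
    	"os/exec"
    	"time"
    )

    // applyWithRetry shells out to kubectl and retries with a jittered,
    // roughly doubling delay, similar in spirit to the "will retry after ..."
    // lines in the log above.
    func applyWithRetry(kubeconfig, manifest string, attempts int) error {
    	delay := 2 * time.Second
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
    		// Jitter so concurrent appliers (storageclass, storage-provisioner)
    		// do not retry in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("apply failed, will retry after %s: %v\n", sleep, lastErr)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return lastErr
    }

    func main() {
    	// Paths reused from the log above; adjust for your own environment.
    	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
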
	I1008 14:44:01.799124  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.799237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.299268  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.299716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.799390  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.799502  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.799880  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:02.799960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:03.299492  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.299563  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.299925  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.798665  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.798754  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.940379  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:03.990275  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:03.993084  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:03.993122  118459 retry.go:31] will retry after 4.028920288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:04.298653  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.299341  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:04.798956  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.799033  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:05.299051  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.299176  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.299598  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:05.299657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:05.799285  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.799356  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.799725  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.299393  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.299841  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.799593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.799944  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.299053  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.798714  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.798786  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.799261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:07.799325  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
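
The GET requests against /api/v1/nodes/functional-367186 repeating every ~500ms are a readiness poll: the client fetches the Node object, checks its Ready condition, and, while the connection is refused, logs the node_ready.go warning and tries again. A minimal client-go sketch of that kind of check (an illustrative stand-in, not minikube's node_ready.go; the kubeconfig path and timeout are assumptions) looks like:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named Node until its Ready condition is True,
    // logging transient errors (e.g. connection refused) and retrying,
    // much like the poll visible in the log above.
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	if err := waitNodeReady("/var/lib/minikube/kubeconfig", "functional-367186", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
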
	I1008 14:44:08.019559  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:08.023109  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:08.072023  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.074947  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074963  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074982  118459 retry.go:31] will retry after 6.922745297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.076401  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.076428  118459 retry.go:31] will retry after 5.441570095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.298802  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.299153  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:08.799104  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.799539  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.299229  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.299310  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.299686  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.799379  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.799472  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.799807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:09.799869  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:10.299531  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.299603  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.299958  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:10.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.799011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.298647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.299123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.798895  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.799225  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:12.298842  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.298915  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:12.299310  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:12.798893  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.299008  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.518328  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:13.572977  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:13.573020  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.573038  118459 retry.go:31] will retry after 15.052611026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.798632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.798973  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.298894  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.299223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.798866  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.798962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:14.799351  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:14.998673  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:15.051035  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:15.051092  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.051116  118459 retry.go:31] will retry after 7.550335313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.299491  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.299568  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:15.799546  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.799646  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.800035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.298586  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.299006  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:17.298969  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.299043  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:17.299467  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:17.798964  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.299415  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.799349  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.799698  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:19.299431  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.299558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.299972  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:19.300047  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:19.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.299042  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.798691  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.798998  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.298572  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.298698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.299121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:21.799149  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:22.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:22.602557  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:22.653552  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:22.656108  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.656138  118459 retry.go:31] will retry after 31.201355729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.799459  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.799558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.799901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.299026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.798988  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.799061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:23.799539  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:24.299048  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.299131  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.299558  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:24.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.799285  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.799622  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.299437  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.299594  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.299994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.799056  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:26.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.298737  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.299066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:26.299138  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:26.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.799032  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.298934  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.299032  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.798977  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:28.298998  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.299130  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.299524  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:28.299599  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:28.625918  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:28.675593  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:28.678080  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.678122  118459 retry.go:31] will retry after 23.952219527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.799477  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.799570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.799970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.298589  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.298685  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.798713  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.798787  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.799221  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.298792  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.299229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.798891  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.799335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:30.799398  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:31.298936  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.299373  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:31.798930  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.799039  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.299072  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.799097  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.799529  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:32.799596  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:33.299230  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.299325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.299740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:33.798515  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.798587  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.798936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.299656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.798590  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.798664  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.799020  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:35.298588  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.298666  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.299052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:35.299143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:35.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.299007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.798626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:37.298948  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.299051  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:37.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:37.799006  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.799086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.799417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.299020  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.299100  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.299469  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.799369  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.799927  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:39.299580  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.299693  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.300082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:39.300150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:39.798611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.799046  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.298592  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.298670  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.798637  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.299138  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.798729  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.798815  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.799152  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:41.799215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:42.298723  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.298799  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.299170  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:42.798731  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.798836  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.799203  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.298908  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.299278  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.799167  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.799250  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:43.799661  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:44.299314  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.299416  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.299827  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:44.799577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.799657  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.800048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.298599  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.299047  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:46.298671  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.299126  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:46.299191  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:46.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.798850  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.799223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.299119  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.299231  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.299611  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.799336  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.799765  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:48.299501  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.299582  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.299947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:48.300006  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:48.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.798729  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.298752  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.798901  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.798982  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.298921  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.299003  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.798955  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.799416  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:50.799534  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:51.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.299214  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.299601  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:51.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.799388  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.799753  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.299413  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.299503  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.299839  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.631482  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:52.682310  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:52.684872  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.684901  118459 retry.go:31] will retry after 32.790446037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.799279  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.799368  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.799719  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:52.799778  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:53.299429  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.299873  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.799081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.858347  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:53.912029  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:53.912083  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:53.912107  118459 retry.go:31] will retry after 18.370397631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:54.298601  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:54.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.799095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:55.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.299226  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:55.299302  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:55.798903  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.798996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.298927  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.299347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:57.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.299509  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:57.299581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:57.799169  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.799283  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.299318  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.299391  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.299772  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.799563  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.799658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.800017  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.298677  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.299050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.798757  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:59.799217  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:00.298721  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.298821  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:00.798884  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.799337  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.298871  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.298949  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.299314  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.798878  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.799285  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:01.799345  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:02.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.299353  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:02.798928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.799012  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.799359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.298939  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.299014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.799249  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:03.799744  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:04.299367  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.299468  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.299800  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:04.799513  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.799614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.798722  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.799201  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:06.298786  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.298890  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.299232  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:06.299292  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:06.798807  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.798900  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.799230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.299263  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.299613  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.799343  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.799420  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.799763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:08.299428  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.299527  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.299872  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:08.299937  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:08.798593  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.798667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.799001  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.298582  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.798617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.798698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.298622  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.799101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:10.799164  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:11.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:11.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.282739  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:45:12.299378  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.299488  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.299877  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.333950  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336622  118459 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:12.799135  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.799209  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:12.799657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:13.299289  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.299709  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:13.798861  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.798943  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.298849  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.298932  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.299258  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.799040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:15.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.299098  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:15.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:15.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.799155  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.799530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.299229  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.299576  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.799320  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.799402  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.799740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.298566  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:17.799082  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:18.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.298700  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:18.798851  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.798935  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.298852  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.299298  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.798906  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.798988  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.799347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:19.799406  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:20.298933  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.299355  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:20.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.799025  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.799390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.298968  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.299041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.799011  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.799369  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:22.299008  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.299101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.299519  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:22.299580  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:22.799213  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.799289  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.299390  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.299767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.799544  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.799617  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.799951  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.298561  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.298641  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.798607  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.799048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:24.799112  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:25.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:25.476423  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:45:25.531081  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531142  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531259  118459 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:25.534376  118459 out.go:179] * Enabled addons: 
	I1008 14:45:25.535655  118459 addons.go:514] duration metric: took 1m38.356657385s for enable addons: enabled=[]
	I1008 14:45:25.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.798640  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.798959  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.298537  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.299011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.798610  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.798686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:26.799185  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:27.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.299111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:27.799210  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.799306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.799715  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.299395  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.299520  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.299905  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.798594  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:29.298630  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:29.299127  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:29.798717  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.798816  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.799196  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.299218  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.798893  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.799252  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:31.298834  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.299230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:31.299294  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:31.798829  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.798912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.799264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.298806  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.299262  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.799271  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:33.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.298966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.299345  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:33.299417  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:33.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.799654  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.299321  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.299423  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.299763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.799422  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.799533  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.799902  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.298559  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.298639  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.798592  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:35.799128  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:36.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.299156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:36.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.798779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.799148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.299530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:37.799713  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:38.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.299405  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.299766  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:38.799558  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.799667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.800040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.298689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.798644  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.799106  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:40.298658  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.299095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:40.299169  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:40.798657  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.799078  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.298629  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.798741  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.799102  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:42.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.299168  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:42.299237  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:42.798716  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.798788  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.298801  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.799130  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.799591  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:44.299252  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.299339  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.299712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:44.299773  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:44.799365  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.799825  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.299172  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.299287  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.299676  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.799167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.298781  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.298881  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.299294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.798856  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.798931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.799293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:46.799356  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:47.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.299246  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:47.799327  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.799406  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.299439  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.299542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.299919  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.798704  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:49.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:49.299162  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:49.798684  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.799141  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.298714  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.298795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.299144  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.798776  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.798853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.799207  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:51.298712  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.298791  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.299166  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:51.299231  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:51.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.798829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.799189  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.298885  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.299246  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.799319  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.298699  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.298776  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.299137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.799143  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.799505  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:53.799579  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:54.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.299276  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.299636  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:54.799331  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.799784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.299472  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.798585  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.798665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:56.298627  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:56.299148  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:56.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.799077  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.299523  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.799274  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.799642  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:58.299356  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.299473  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.299961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:58.300023  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:58.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.799059  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.298721  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.798755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.798766  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.798873  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.799228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:00.799293  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:01.298587  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.299023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:01.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.798731  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.799123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.298698  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.799202  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:03.298750  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.298833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:03.299244  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:03.799037  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.799122  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.799491  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.299167  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.299249  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.299630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.799414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.799795  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:05.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.299956  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:05.300019  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:05.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.298578  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.799117  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.299118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.299493  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.799139  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.799496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:07.799569  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:08.299035  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.299126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:08.799377  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.799812  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.298529  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.298607  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.298931  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.799111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:10.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.299130  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:10.299230  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:10.798708  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.798795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.298650  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.298984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.798571  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.798994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.299013  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.798609  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.799038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:12.799099  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:13.298602  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:13.798949  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.799028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.799365  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.299036  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.299417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.798995  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:14.799507  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:15.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:15.798739  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.299195  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.798747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.799211  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:17.299171  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.299252  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.299620  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:17.299687  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:17.799351  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.799429  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.799815  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.299581  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.299663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.300026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.798911  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.798995  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.799361  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.299017  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.798976  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.799059  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:19.799484  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:20.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.299063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.299433  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:20.799000  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.799073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.799422  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.299052  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.798986  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.799475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:21.799540  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:22.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.299073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.299421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:22.799016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.799089  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.299012  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.299086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.799352  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.799434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.799781  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:23.799842  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:24.299407  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.299843  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:24.799556  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.799961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.298635  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.298981  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.799082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:26.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.299076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:26.299150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-367186 shown above repeated every ~500 ms with an empty response each time ("connection refused"); only the periodic node_ready "will retry" warnings from this window are kept below]
	W1008 14:46:28.299409  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:30.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:32.799234  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:34.799606  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:36.799998  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:39.299547  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:41.299888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:43.799158  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:45.799190  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:47.799577  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:50.299334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:52.299815  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:54.799557  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:57.299974  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:46:59.799261  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:01.799395  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:03.799888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:06.299201  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:08.299953  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:10.799264  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:12.799334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:14.799845  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:17.299381  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:19.299630  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:21.799168  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:23.799509  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	W1008 14:47:26.299192  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:26.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.799142  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.299005  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.299090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.299419  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.799045  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.799137  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.799544  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:28.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.299617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:28.299678  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:28.799473  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.799560  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.799899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.299985  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.798622  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.798983  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.298553  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.298632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.298995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.798697  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:30.799179  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:31.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.298695  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.299073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:31.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.298977  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.798588  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.798663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.799041  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:33.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:33.299097  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:33.798957  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.299095  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.299494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:35.299241  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:35.299795  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:35.799437  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.799530  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.799892  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.299548  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.798599  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.798674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.298967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.299050  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.299424  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.799403  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:37.799496  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:38.298988  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.299067  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.299408  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:38.799345  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.799481  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.799859  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.299510  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.299593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.299976  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:40.298711  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.298796  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:40.299245  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:40.798752  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.798837  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.799193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.298853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.299237  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.798946  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.799303  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:42.298889  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.298962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.299322  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:42.299384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:42.798944  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.298977  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.299047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.299368  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.799221  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.799302  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.799663  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:44.299294  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.299790  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:44.299872  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:44.799433  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.799542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.799888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.299563  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.299636  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.299993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:46.299512  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.299633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.300025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:46.300089  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:46.798790  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.798884  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.799229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.299087  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.299184  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.299563  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.798932  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.799009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.799428  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.299029  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.299106  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.299501  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.799380  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.799486  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.799833  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:48.799903  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:49.299564  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.300007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:49.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.799052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:51.298640  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.299093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:51.299156  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:51.798681  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.798761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.799132  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.298710  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.298829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.798883  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.799265  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:53.298856  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.298931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:53.299362  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:53.799190  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.799266  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.299296  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.799472  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.799553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.799952  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.298584  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.298660  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.798627  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.798713  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:55.799173  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:56.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.298834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:56.798788  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.798866  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.799242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.299122  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.299496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.799239  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.799714  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:57.799774  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:58.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.299464  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.299809  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:58.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.798672  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.799025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.298591  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.298674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.798618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.798694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.799057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:00.298633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:00.299182  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:00.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.799076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.298687  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.298762  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.299124  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.798694  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.798782  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.799125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.298730  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.298807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.299143  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:02.799242  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:03.298766  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.299191  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:03.799090  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.799168  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.799556  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.798656  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:05.298725  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.298803  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.299148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:05.299215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:05.798756  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.798859  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.298856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.299228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.799046  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.799394  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:07.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.299273  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:07.299732  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:07.799538  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.799609  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.799950  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.299147  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.799521  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:09.299345  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.299428  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.299805  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:09.299871  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:09.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.298815  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.298898  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.799063  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.799142  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.799548  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:11.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.299512  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.299861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:11.299938  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:11.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.298858  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.298934  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.298773  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.298847  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.799118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.799495  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:13.799564  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:14.299338  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.299418  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.299784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:14.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.798633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.798966  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.299111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.798836  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:16.299034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.299119  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.299472  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:16.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:16.799263  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.799716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.299984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.799093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.298690  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.298768  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.299127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.798926  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.799002  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:18.799405  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-367186 poll repeated every ~500ms from 14:48:19 through 14:49:14, each attempt refused ("dial tcp 192.168.49.2:8441: connect: connection refused"), with node_ready retry warnings logged roughly every 2s ...]
	I1008 14:49:15.299254  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.299343  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:15.798574  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.798655  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.298700  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.298800  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.299145  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.799300  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:17.299095  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.299193  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.299535  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:17.299597  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:17.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.799337  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.299759  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.799524  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.799598  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:19.299552  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.299638  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:19.300058  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:19.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.299002  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.798789  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.298846  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.298952  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.299301  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.799159  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.799239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.799630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:21.799697  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:22.299522  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.299619  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.299991  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:22.798758  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.798834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.799181  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.299061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.299437  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.799357  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.799433  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.799786  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:23.799850  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:24.298547  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:24.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.798835  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.799161  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.298901  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.298996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.299334  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.799154  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.799236  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.799604  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:26.299399  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.299521  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.299888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:26.299960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:26.798629  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.799035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.298805  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.298901  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.299256  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.798972  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.799378  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.299186  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.799616  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.800091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:28.800170  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:29.298943  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.299021  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.299362  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:29.799176  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.799282  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.299485  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.299566  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.299899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.798586  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:31.298771  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.299157  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:31.299210  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:31.798882  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.798989  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.299195  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.299278  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.299631  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.799405  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.799515  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.799866  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.298635  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.798843  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.798922  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.799266  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:33.799342  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:34.299019  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.299432  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:34.799270  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.799358  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.799712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.299543  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.299995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.798712  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.798807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.799171  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:36.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.298739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:36.299199  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:36.798682  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.299039  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.299475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.799319  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.799403  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.298633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.298999  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.799060  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:38.799123  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:39.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.298919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:39.799162  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.799585  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.299409  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.299508  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.299869  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.799084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:40.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:41.298831  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.298921  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:41.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.299467  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.299819  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.798568  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.798643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.798984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:43.298738  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.298822  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:43.299318  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:43.799035  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.799483  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.299382  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.299773  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.798575  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.799012  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.298748  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.298824  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.299159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.798886  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.798960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.799321  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:45.799384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:46.299022  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.299330  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:46.798742  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.798830  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.799234  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:47.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:49:47.299208  118459 node_ready.go:38] duration metric: took 6m0.000826952s for node "functional-367186" to be "Ready" ...
	I1008 14:49:47.302039  118459 out.go:203] 
	W1008 14:49:47.303804  118459 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 14:49:47.303820  118459 out.go:285] * 
	W1008 14:49:47.305511  118459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:49:47.306606  118459 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 14:49:54 functional-367186 crio[2943]: time="2025-10-08T14:49:54.378804527Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=481a1102-da94-4485-94f4-95441e868bc7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:54 functional-367186 crio[2943]: time="2025-10-08T14:49:54.403789748Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=601c8d62-3979-47b2-949e-01d06451fb80 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:54 functional-367186 crio[2943]: time="2025-10-08T14:49:54.40395032Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=601c8d62-3979-47b2-949e-01d06451fb80 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:54 functional-367186 crio[2943]: time="2025-10-08T14:49:54.403993688Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=601c8d62-3979-47b2-949e-01d06451fb80 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.083405423Z" level=info msg="Checking image status: minikube-local-cache-test:functional-367186" id=18ec89ea-331d-4853-93e8-160d056c7e6b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.108902684Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-367186" id=f237be11-ca3d-46e6-99de-d175fc902cd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.109018359Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-367186 not found" id=f237be11-ca3d-46e6-99de-d175fc902cd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.109048866Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-367186 found" id=f237be11-ca3d-46e6-99de-d175fc902cd7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.133246972Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-367186" id=bff13eef-95d7-4545-b135-e7d3b9675cfd name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.133380439Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-367186 not found" id=bff13eef-95d7-4545-b135-e7d3b9675cfd name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.133414109Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-367186 found" id=bff13eef-95d7-4545-b135-e7d3b9675cfd name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:56 functional-367186 crio[2943]: time="2025-10-08T14:49:56.859916314Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4b0bc182-7ddd-471e-9ca5-d328101e8fb5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.151436536Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1a55bfcb-d770-42c4-9a24-0e0a23091fca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.151636844Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1a55bfcb-d770-42c4-9a24-0e0a23091fca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.151677329Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1a55bfcb-d770-42c4-9a24-0e0a23091fca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.593958023Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=45253d55-fa6d-48b3-9c33-97a29e143c87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.594066719Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=45253d55-fa6d-48b3-9c33-97a29e143c87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.594094528Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=45253d55-fa6d-48b3-9c33-97a29e143c87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.618730454Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7a79dc20-bae7-494b-979f-e9b900db7ca7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.618879287Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7a79dc20-bae7-494b-979f-e9b900db7ca7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.618913322Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7a79dc20-bae7-494b-979f-e9b900db7ca7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.642984111Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=27a26596-df15-4422-b397-5213400c194d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.643106269Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=27a26596-df15-4422-b397-5213400c194d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.643144425Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=27a26596-df15-4422-b397-5213400c194d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:58 functional-367186 crio[2943]: time="2025-10-08T14:49:58.090211707Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=aedb1671-958e-490e-8b22-b06bf378bfd2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:49:59.465218    5298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:59.465806    5298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:59.467340    5298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:59.467787    5298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:49:59.468986    5298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 14:49:59 up  2:32,  0 user,  load average: 0.42, 0.12, 0.47
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 14:49:52 functional-367186 kubelet[1801]:  > podSandboxID="6bb0846b3d956cef333ada694b03e76cbfe5e0591236f02da43659a5e4ee4ab6"
	Oct 08 14:49:52 functional-367186 kubelet[1801]: E1008 14:49:52.463204    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:52 functional-367186 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c58427f58fdd58b4fdb4fadaedd99fdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:52 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:52 functional-367186 kubelet[1801]: E1008 14:49:52.463265    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c58427f58fdd58b4fdb4fadaedd99fdb"
	Oct 08 14:49:53 functional-367186 kubelet[1801]: E1008 14:49:53.436600    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:53 functional-367186 kubelet[1801]: E1008 14:49:53.462161    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:53 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:53 functional-367186 kubelet[1801]:  > podSandboxID="c0e5f3cd2b90a2545cb343765bc3b9be24372f306973786fac682f615775a4ff"
	Oct 08 14:49:53 functional-367186 kubelet[1801]: E1008 14:49:53.462297    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:53 functional-367186 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:53 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:53 functional-367186 kubelet[1801]: E1008 14:49:53.462337    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: E1008 14:49:56.115790    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: I1008 14:49:56.330389    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: E1008 14:49:56.330779    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:49:57 functional-367186 kubelet[1801]: E1008 14:49:57.460164    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.436100    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460419    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:59 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:59 functional-367186 kubelet[1801]:  > podSandboxID="4f5c4547ba25f8047b1a01ec096a800bad6487d4d0d0fe8fd4a152424b0efbf9"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460550    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:59 functional-367186 kubelet[1801]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:59 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460587    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (298.193515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-367186 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-367186 get pods: exit status 1 (98.64154ms)

                                                
                                                
** stderr ** 
	E1008 14:50:00.422163  124405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:50:00.422576  124405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:50:00.423991  124405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:50:00.424331  124405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 14:50:00.425975  124405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-367186 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
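
The inspect dump above is also what minikube itself consumes: the "NetworkSettings.Ports" map is how it discovers the host port mapped to the container's SSH port, via the same Go template that shows up in the "Last Start" log further down. A minimal sketch of that query for this profile (the template mirrors the logged command; the jq form is an added convenience and assumes jq is installed on the host):

    # Go template, as minikube runs it in the log below
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-367186
    # same field via jq; both should print 32778 given the Ports block above
    docker container inspect functional-367186 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'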
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (290.661792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
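
The host is reported as Running even though the command exits non-zero because the --format template prints only the .Host field, while the exit code reflects the cluster as a whole; that is why the helper notes it "may be ok". The other component fields listed by minikube's default status output can be queried with the same template mechanism; a short sketch against the same profile (field names assumed from that output: Host, Kubelet, APIServer, Kubeconfig):

    out/minikube-linux-amd64 status -p functional-367186 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'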
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-526605 --log_dir /tmp/nospam-526605 pause                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.1                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.3                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:latest                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add minikube-local-cache-test:functional-367186                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache delete minikube-local-cache-test:functional-367186                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl images                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ cache   │ functional-367186 cache reload                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ kubectl │ functional-367186 kubectl -- --context functional-367186 get pods                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
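	# The last row of the Audit table above shows minikube's kubectl pass-through, the code path this
	# post-mortem concerns. A minimal manual reproduction against the same profile would be:
	out/minikube-linux-amd64 -p functional-367186 kubectl -- --context functional-367186 get pods
	# Its END TIME column is empty, as with the two start rows higher up.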
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:43:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:43:43.627861  118459 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:43:43.627954  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.627958  118459 out.go:374] Setting ErrFile to fd 2...
	I1008 14:43:43.627962  118459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:43:43.628171  118459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:43:43.628614  118459 out.go:368] Setting JSON to false
	I1008 14:43:43.629495  118459 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8775,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:43:43.629593  118459 start.go:141] virtualization: kvm guest
	I1008 14:43:43.631500  118459 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:43:43.632767  118459 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:43:43.632773  118459 notify.go:220] Checking for updates...
	I1008 14:43:43.634937  118459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:43:43.636218  118459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:43.640666  118459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:43:43.642185  118459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:43:43.643421  118459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:43:43.644930  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:43.645039  118459 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:43:43.667985  118459 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:43:43.668119  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.723136  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.713080092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.723287  118459 docker.go:318] overlay module found
	I1008 14:43:43.725936  118459 out.go:179] * Using the docker driver based on existing profile
	I1008 14:43:43.727069  118459 start.go:305] selected driver: docker
	I1008 14:43:43.727087  118459 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.727171  118459 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:43:43.727263  118459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:43:43.781426  118459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 14:43:43.772365606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:43:43.782086  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:43.782179  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:43.782243  118459 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:43.784039  118459 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:43:43.785148  118459 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:43:43.786245  118459 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:43:43.787146  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:43.787178  118459 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:43:43.787189  118459 cache.go:58] Caching tarball of preloaded images
	I1008 14:43:43.787237  118459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:43:43.787273  118459 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:43:43.787283  118459 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:43:43.787359  118459 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:43:43.806536  118459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:43:43.806562  118459 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:43:43.806584  118459 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:43:43.806623  118459 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:43:43.806704  118459 start.go:364] duration metric: took 49.444µs to acquireMachinesLock for "functional-367186"
	I1008 14:43:43.806736  118459 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:43:43.806747  118459 fix.go:54] fixHost starting: 
	I1008 14:43:43.806975  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:43.822750  118459 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:43:43.822776  118459 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:43:43.824577  118459 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:43:43.824603  118459 machine.go:93] provisionDockerMachine start ...
	I1008 14:43:43.824673  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:43.841160  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:43.841463  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:43.841483  118459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:43:43.985591  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:43.985624  118459 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:43:43.985682  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.003073  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.003294  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.003316  118459 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:43:44.156671  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:43:44.156765  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.173583  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.173820  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.173845  118459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:43:44.319171  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:43:44.319200  118459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:43:44.319238  118459 ubuntu.go:190] setting up certificates
	I1008 14:43:44.319253  118459 provision.go:84] configureAuth start
	I1008 14:43:44.319306  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:44.337134  118459 provision.go:143] copyHostCerts
	I1008 14:43:44.337168  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337204  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:43:44.337226  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:43:44.337295  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:43:44.337373  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337398  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:43:44.337405  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:43:44.337431  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:43:44.337503  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337524  118459 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:43:44.337531  118459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:43:44.337557  118459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:43:44.337611  118459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:43:44.449681  118459 provision.go:177] copyRemoteCerts
	I1008 14:43:44.449756  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:43:44.449792  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.466984  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:44.569881  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 14:43:44.569953  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:43:44.587517  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 14:43:44.587583  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:43:44.605065  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 14:43:44.605124  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:43:44.622323  118459 provision.go:87] duration metric: took 303.055536ms to configureAuth
	I1008 14:43:44.622354  118459 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:43:44.622537  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:44.622644  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.639387  118459 main.go:141] libmachine: Using SSH client type: native
	I1008 14:43:44.639612  118459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:43:44.639636  118459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:43:44.900547  118459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:43:44.900571  118459 machine.go:96] duration metric: took 1.07595926s to provisionDockerMachine
	I1008 14:43:44.900586  118459 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:43:44.900600  118459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:43:44.900655  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:43:44.900706  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:44.917783  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.020925  118459 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:43:45.024356  118459 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1008 14:43:45.024381  118459 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1008 14:43:45.024389  118459 command_runner.go:130] > VERSION_ID="12"
	I1008 14:43:45.024395  118459 command_runner.go:130] > VERSION="12 (bookworm)"
	I1008 14:43:45.024402  118459 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1008 14:43:45.024406  118459 command_runner.go:130] > ID=debian
	I1008 14:43:45.024410  118459 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1008 14:43:45.024415  118459 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1008 14:43:45.024420  118459 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1008 14:43:45.024512  118459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:43:45.024537  118459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:43:45.024550  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:43:45.024614  118459 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:43:45.024709  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:43:45.024722  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 14:43:45.024832  118459 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:43:45.024842  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> /etc/test/nested/copy/98900/hosts
	I1008 14:43:45.024895  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:43:45.032438  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:45.049657  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:43:45.066943  118459 start.go:296] duration metric: took 166.34143ms for postStartSetup
	I1008 14:43:45.067016  118459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:43:45.067050  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.084921  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.184592  118459 command_runner.go:130] > 50%
	I1008 14:43:45.184676  118459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:43:45.188918  118459 command_runner.go:130] > 148G
	I1008 14:43:45.189157  118459 fix.go:56] duration metric: took 1.382403598s for fixHost
	I1008 14:43:45.189184  118459 start.go:83] releasing machines lock for "functional-367186", held for 1.382467794s
	I1008 14:43:45.189256  118459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:43:45.206786  118459 ssh_runner.go:195] Run: cat /version.json
	I1008 14:43:45.206834  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.206924  118459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:43:45.207047  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:45.224940  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.226308  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:45.323475  118459 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1008 14:43:45.323661  118459 ssh_runner.go:195] Run: systemctl --version
	I1008 14:43:45.374536  118459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 14:43:45.376350  118459 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1008 14:43:45.376387  118459 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1008 14:43:45.376484  118459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:43:45.412862  118459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 14:43:45.417295  118459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 14:43:45.417656  118459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:43:45.417717  118459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:43:45.425598  118459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:43:45.425618  118459 start.go:495] detecting cgroup driver to use...
	I1008 14:43:45.425645  118459 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:43:45.425686  118459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:43:45.440680  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:43:45.452844  118459 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:43:45.452899  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:43:45.466598  118459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:43:45.477998  118459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:43:45.564577  118459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:43:45.653273  118459 docker.go:234] disabling docker service ...
	I1008 14:43:45.653343  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:43:45.667540  118459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:43:45.679916  118459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:43:45.764673  118459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:43:45.852326  118459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:43:45.864944  118459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:43:45.878738  118459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 14:43:45.878793  118459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:43:45.878844  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.887987  118459 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:43:45.888052  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.896857  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.905895  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.914639  118459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:43:45.922953  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.931880  118459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.940059  118459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:45.948635  118459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:43:45.955347  118459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 14:43:45.956050  118459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:43:45.963162  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.045488  118459 ssh_runner.go:195] Run: sudo systemctl restart crio
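	# After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain the following
	# (a sketch reconstructed from the logged commands, not read back from the node):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	# A quick check from inside the node:
	#   grep -nE 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf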
	I1008 14:43:46.156934  118459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:43:46.156997  118459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:43:46.161038  118459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 14:43:46.161067  118459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 14:43:46.161077  118459 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1008 14:43:46.161086  118459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.161094  118459 command_runner.go:130] > Access: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161118  118459 command_runner.go:130] > Modify: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161129  118459 command_runner.go:130] > Change: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161138  118459 command_runner.go:130] >  Birth: 2025-10-08 14:43:46.140175728 +0000
	I1008 14:43:46.161173  118459 start.go:563] Will wait 60s for crictl version
	I1008 14:43:46.161212  118459 ssh_runner.go:195] Run: which crictl
	I1008 14:43:46.164650  118459 command_runner.go:130] > /usr/local/bin/crictl
	I1008 14:43:46.164746  118459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:43:46.189255  118459 command_runner.go:130] > Version:  0.1.0
	I1008 14:43:46.189279  118459 command_runner.go:130] > RuntimeName:  cri-o
	I1008 14:43:46.189294  118459 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1008 14:43:46.189299  118459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 14:43:46.189317  118459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:43:46.189365  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.215704  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.215734  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.215741  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.215746  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.215750  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.215755  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.215762  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.215770  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.215806  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.215819  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.215825  118459 command_runner.go:130] >      static
	I1008 14:43:46.215835  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.215846  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.215857  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.215867  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.215877  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.215885  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.215897  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.215909  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.215921  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.217136  118459 ssh_runner.go:195] Run: crio --version
	I1008 14:43:46.243203  118459 command_runner.go:130] > crio version 1.34.1
	I1008 14:43:46.243231  118459 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1008 14:43:46.243241  118459 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1008 14:43:46.243249  118459 command_runner.go:130] >    GitTreeState:   dirty
	I1008 14:43:46.243256  118459 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1008 14:43:46.243264  118459 command_runner.go:130] >    GoVersion:      go1.24.6
	I1008 14:43:46.243272  118459 command_runner.go:130] >    Compiler:       gc
	I1008 14:43:46.243281  118459 command_runner.go:130] >    Platform:       linux/amd64
	I1008 14:43:46.243293  118459 command_runner.go:130] >    Linkmode:       static
	I1008 14:43:46.243299  118459 command_runner.go:130] >    BuildTags:
	I1008 14:43:46.243304  118459 command_runner.go:130] >      static
	I1008 14:43:46.243312  118459 command_runner.go:130] >      netgo
	I1008 14:43:46.243317  118459 command_runner.go:130] >      osusergo
	I1008 14:43:46.243327  118459 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1008 14:43:46.243336  118459 command_runner.go:130] >      seccomp
	I1008 14:43:46.243348  118459 command_runner.go:130] >      apparmor
	I1008 14:43:46.243358  118459 command_runner.go:130] >      selinux
	I1008 14:43:46.243374  118459 command_runner.go:130] >    LDFlags:          unknown
	I1008 14:43:46.243382  118459 command_runner.go:130] >    SeccompEnabled:   true
	I1008 14:43:46.243390  118459 command_runner.go:130] >    AppArmorEnabled:  false
	I1008 14:43:46.246714  118459 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:43:46.248034  118459 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:43:46.264534  118459 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:43:46.268778  118459 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1008 14:43:46.268905  118459 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:43:46.269051  118459 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:43:46.269113  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.298040  118459 command_runner.go:130] > {
	I1008 14:43:46.298059  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.298064  118459 command_runner.go:130] >     {
	I1008 14:43:46.298072  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.298077  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298082  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.298087  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298091  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298100  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.298109  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.298112  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298117  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.298121  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298138  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298146  118459 command_runner.go:130] >     },
	I1008 14:43:46.298151  118459 command_runner.go:130] >     {
	I1008 14:43:46.298164  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.298170  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298175  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.298181  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298185  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298191  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.298201  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.298207  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298210  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.298217  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298225  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298234  118459 command_runner.go:130] >     },
	I1008 14:43:46.298243  118459 command_runner.go:130] >     {
	I1008 14:43:46.298255  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.298262  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298267  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.298273  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298277  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298283  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.298293  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.298298  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298302  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.298309  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.298315  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298323  118459 command_runner.go:130] >     },
	I1008 14:43:46.298328  118459 command_runner.go:130] >     {
	I1008 14:43:46.298341  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.298350  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298359  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.298362  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298371  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298380  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.298387  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.298393  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298398  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.298408  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298417  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298425  118459 command_runner.go:130] >       },
	I1008 14:43:46.298438  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298461  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298467  118459 command_runner.go:130] >     },
	I1008 14:43:46.298472  118459 command_runner.go:130] >     {
	I1008 14:43:46.298481  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.298490  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298499  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.298507  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298514  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298521  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.298532  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.298540  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298548  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.298557  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298566  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298573  118459 command_runner.go:130] >       },
	I1008 14:43:46.298579  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298588  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298597  118459 command_runner.go:130] >     },
	I1008 14:43:46.298602  118459 command_runner.go:130] >     {
	I1008 14:43:46.298612  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.298619  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298628  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.298636  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298647  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298662  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.298676  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.298684  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298690  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.298699  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298705  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298713  118459 command_runner.go:130] >       },
	I1008 14:43:46.298725  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298735  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298744  118459 command_runner.go:130] >     },
	I1008 14:43:46.298752  118459 command_runner.go:130] >     {
	I1008 14:43:46.298762  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.298784  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298800  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.298808  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298815  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298829  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.298843  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.298851  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298860  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.298864  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298867  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298871  118459 command_runner.go:130] >     },
	I1008 14:43:46.298882  118459 command_runner.go:130] >     {
	I1008 14:43:46.298891  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.298895  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.298899  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.298903  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298907  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.298914  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.298931  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.298937  118459 command_runner.go:130] >       ],
	I1008 14:43:46.298941  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.298948  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.298952  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.298957  118459 command_runner.go:130] >       },
	I1008 14:43:46.298961  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.298967  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.298971  118459 command_runner.go:130] >     },
	I1008 14:43:46.298978  118459 command_runner.go:130] >     {
	I1008 14:43:46.298987  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.298996  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.299004  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.299025  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299035  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.299047  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.299060  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.299068  118459 command_runner.go:130] >       ],
	I1008 14:43:46.299074  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.299081  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.299087  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.299095  118459 command_runner.go:130] >       },
	I1008 14:43:46.299100  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.299108  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.299113  118459 command_runner.go:130] >     }
	I1008 14:43:46.299117  118459 command_runner.go:130] >   ]
	I1008 14:43:46.299125  118459 command_runner.go:130] > }
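
Editor's note: the JSON block above is the raw output of `sudo crictl images --output json` that minikube inspects before deciding whether the CRI-O image preload still needs to be extracted. Purely as a standalone sketch (not minikube source; the program and struct below are assumptions for illustration, only the JSON field names come from the output above), the same payload can be decoded in Go to list the tags already present on the node:

	// listtags.go - decode `crictl images --output json` and print repo tags.
	// Hypothetical usage: sudo crictl images --output json | go run listtags.go
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the shape of the JSON shown in the log above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. registry.k8s.io/kube-apiserver:v1.34.1
			}
		}
	}

Fed the captured JSON on stdin, such a sketch would print the kindnetd, storage-provisioner, coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler and pause tags listed above, which is why the log concludes that all images are preloaded.
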
	I1008 14:43:46.300090  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.300109  118459 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:43:46.300168  118459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:43:46.325949  118459 command_runner.go:130] > {
	I1008 14:43:46.325970  118459 command_runner.go:130] >   "images":  [
	I1008 14:43:46.325974  118459 command_runner.go:130] >     {
	I1008 14:43:46.325985  118459 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1008 14:43:46.325990  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.325996  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1008 14:43:46.325999  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326003  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326016  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1008 14:43:46.326031  118459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1008 14:43:46.326040  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326047  118459 command_runner.go:130] >       "size":  "109379124",
	I1008 14:43:46.326055  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326063  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326068  118459 command_runner.go:130] >     },
	I1008 14:43:46.326072  118459 command_runner.go:130] >     {
	I1008 14:43:46.326083  118459 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 14:43:46.326089  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326094  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 14:43:46.326100  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326104  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326125  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 14:43:46.326136  118459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 14:43:46.326142  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326147  118459 command_runner.go:130] >       "size":  "31470524",
	I1008 14:43:46.326151  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326158  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326163  118459 command_runner.go:130] >     },
	I1008 14:43:46.326166  118459 command_runner.go:130] >     {
	I1008 14:43:46.326172  118459 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1008 14:43:46.326178  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326183  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1008 14:43:46.326188  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326192  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326201  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1008 14:43:46.326208  118459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1008 14:43:46.326213  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326219  118459 command_runner.go:130] >       "size":  "76103547",
	I1008 14:43:46.326223  118459 command_runner.go:130] >       "username":  "nonroot",
	I1008 14:43:46.326226  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326229  118459 command_runner.go:130] >     },
	I1008 14:43:46.326232  118459 command_runner.go:130] >     {
	I1008 14:43:46.326238  118459 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1008 14:43:46.326245  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326249  118459 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1008 14:43:46.326252  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326256  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326262  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1008 14:43:46.326269  118459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1008 14:43:46.326275  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326279  118459 command_runner.go:130] >       "size":  "195976448",
	I1008 14:43:46.326284  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326287  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326293  118459 command_runner.go:130] >       },
	I1008 14:43:46.326307  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326314  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326317  118459 command_runner.go:130] >     },
	I1008 14:43:46.326320  118459 command_runner.go:130] >     {
	I1008 14:43:46.326326  118459 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1008 14:43:46.326331  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326335  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1008 14:43:46.326338  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326342  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326349  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1008 14:43:46.326358  118459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1008 14:43:46.326361  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326366  118459 command_runner.go:130] >       "size":  "89046001",
	I1008 14:43:46.326369  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326373  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326378  118459 command_runner.go:130] >       },
	I1008 14:43:46.326382  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326385  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326392  118459 command_runner.go:130] >     },
	I1008 14:43:46.326395  118459 command_runner.go:130] >     {
	I1008 14:43:46.326401  118459 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1008 14:43:46.326407  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326412  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1008 14:43:46.326415  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326419  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326429  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1008 14:43:46.326436  118459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1008 14:43:46.326453  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326460  118459 command_runner.go:130] >       "size":  "76004181",
	I1008 14:43:46.326468  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326472  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326475  118459 command_runner.go:130] >       },
	I1008 14:43:46.326479  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326490  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326496  118459 command_runner.go:130] >     },
	I1008 14:43:46.326499  118459 command_runner.go:130] >     {
	I1008 14:43:46.326505  118459 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1008 14:43:46.326511  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326515  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1008 14:43:46.326518  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326522  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326531  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1008 14:43:46.326538  118459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1008 14:43:46.326543  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326548  118459 command_runner.go:130] >       "size":  "73138073",
	I1008 14:43:46.326551  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326555  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326558  118459 command_runner.go:130] >     },
	I1008 14:43:46.326561  118459 command_runner.go:130] >     {
	I1008 14:43:46.326567  118459 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1008 14:43:46.326571  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326575  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1008 14:43:46.326578  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326582  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326588  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1008 14:43:46.326611  118459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1008 14:43:46.326617  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326621  118459 command_runner.go:130] >       "size":  "53844823",
	I1008 14:43:46.326625  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326631  118459 command_runner.go:130] >         "value":  "0"
	I1008 14:43:46.326634  118459 command_runner.go:130] >       },
	I1008 14:43:46.326638  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326643  118459 command_runner.go:130] >       "pinned":  false
	I1008 14:43:46.326646  118459 command_runner.go:130] >     },
	I1008 14:43:46.326650  118459 command_runner.go:130] >     {
	I1008 14:43:46.326655  118459 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1008 14:43:46.326666  118459 command_runner.go:130] >       "repoTags":  [
	I1008 14:43:46.326673  118459 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.326676  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326680  118459 command_runner.go:130] >       "repoDigests":  [
	I1008 14:43:46.326688  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1008 14:43:46.326698  118459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1008 14:43:46.326705  118459 command_runner.go:130] >       ],
	I1008 14:43:46.326709  118459 command_runner.go:130] >       "size":  "742092",
	I1008 14:43:46.326714  118459 command_runner.go:130] >       "uid":  {
	I1008 14:43:46.326718  118459 command_runner.go:130] >         "value":  "65535"
	I1008 14:43:46.326722  118459 command_runner.go:130] >       },
	I1008 14:43:46.326726  118459 command_runner.go:130] >       "username":  "",
	I1008 14:43:46.326732  118459 command_runner.go:130] >       "pinned":  true
	I1008 14:43:46.326735  118459 command_runner.go:130] >     }
	I1008 14:43:46.326738  118459 command_runner.go:130] >   ]
	I1008 14:43:46.326740  118459 command_runner.go:130] > }
	I1008 14:43:46.326842  118459 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:43:46.326863  118459 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:43:46.326869  118459 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:43:46.326972  118459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
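
Editor's note: the [Unit]/[Service] override printed by kubeadm.go:946 above is the kubelet drop-in minikube installs for this node. As an illustrative sketch only (not minikube source; the template body, type and program below are assumptions), the same override can be rendered from the per-node values shown in the config, i.e. KubernetesVersion v1.34.1, node name functional-367186 and node IP 192.168.49.2:

	// unit_sketch.go - render a kubelet systemd override from per-node values.
	package main

	import (
		"os"
		"text/template"
	)

	// nodeVals holds the values that vary per node, taken from the log above.
	type nodeVals struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet-unit").Parse(unitTmpl))
		// Values matching the functional-367186 node in this log.
		_ = t.Execute(os.Stdout, nodeVals{"v1.34.1", "functional-367186", "192.168.49.2"})
	}
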
	I1008 14:43:46.327030  118459 ssh_runner.go:195] Run: crio config
	I1008 14:43:46.368296  118459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 14:43:46.368332  118459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 14:43:46.368340  118459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 14:43:46.368344  118459 command_runner.go:130] > #
	I1008 14:43:46.368350  118459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 14:43:46.368356  118459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 14:43:46.368362  118459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 14:43:46.368376  118459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 14:43:46.368381  118459 command_runner.go:130] > # reload'.
	I1008 14:43:46.368392  118459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 14:43:46.368405  118459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 14:43:46.368418  118459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 14:43:46.368433  118459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 14:43:46.368458  118459 command_runner.go:130] > [crio]
	I1008 14:43:46.368472  118459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 14:43:46.368480  118459 command_runner.go:130] > # containers images, in this directory.
	I1008 14:43:46.368492  118459 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1008 14:43:46.368502  118459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 14:43:46.368514  118459 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1008 14:43:46.368525  118459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 14:43:46.368536  118459 command_runner.go:130] > # imagestore = ""
	I1008 14:43:46.368546  118459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 14:43:46.368559  118459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 14:43:46.368566  118459 command_runner.go:130] > # storage_driver = "overlay"
	I1008 14:43:46.368580  118459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 14:43:46.368587  118459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 14:43:46.368594  118459 command_runner.go:130] > # storage_option = [
	I1008 14:43:46.368599  118459 command_runner.go:130] > # ]
	I1008 14:43:46.368608  118459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 14:43:46.368621  118459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 14:43:46.368631  118459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 14:43:46.368640  118459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 14:43:46.368651  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 14:43:46.368666  118459 command_runner.go:130] > # always happen on a node reboot
	I1008 14:43:46.368678  118459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 14:43:46.368702  118459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 14:43:46.368714  118459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 14:43:46.368726  118459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 14:43:46.368736  118459 command_runner.go:130] > # version_file_persist = ""
	I1008 14:43:46.368751  118459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 14:43:46.368767  118459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 14:43:46.368775  118459 command_runner.go:130] > # internal_wipe = true
	I1008 14:43:46.368791  118459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 14:43:46.368802  118459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 14:43:46.368820  118459 command_runner.go:130] > # internal_repair = true
	I1008 14:43:46.368834  118459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 14:43:46.368847  118459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 14:43:46.368859  118459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 14:43:46.368869  118459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 14:43:46.368882  118459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 14:43:46.368891  118459 command_runner.go:130] > [crio.api]
	I1008 14:43:46.368900  118459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 14:43:46.368910  118459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 14:43:46.368921  118459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 14:43:46.368931  118459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 14:43:46.368942  118459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 14:43:46.368954  118459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 14:43:46.368963  118459 command_runner.go:130] > # stream_port = "0"
	I1008 14:43:46.368971  118459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 14:43:46.368981  118459 command_runner.go:130] > # stream_enable_tls = false
	I1008 14:43:46.368992  118459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 14:43:46.369002  118459 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 14:43:46.369012  118459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 14:43:46.369025  118459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369033  118459 command_runner.go:130] > # stream_tls_cert = ""
	I1008 14:43:46.369043  118459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 14:43:46.369055  118459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1008 14:43:46.369075  118459 command_runner.go:130] > # stream_tls_key = ""
	I1008 14:43:46.369092  118459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 14:43:46.369106  118459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 14:43:46.369121  118459 command_runner.go:130] > # automatically pick up the changes.
	I1008 14:43:46.369130  118459 command_runner.go:130] > # stream_tls_ca = ""
	I1008 14:43:46.369153  118459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369163  118459 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1008 14:43:46.369176  118459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 14:43:46.369186  118459 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1008 14:43:46.369197  118459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 14:43:46.369209  118459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 14:43:46.369219  118459 command_runner.go:130] > [crio.runtime]
	I1008 14:43:46.369229  118459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 14:43:46.369240  118459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 14:43:46.369246  118459 command_runner.go:130] > # "nofile=1024:2048"
	I1008 14:43:46.369260  118459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 14:43:46.369269  118459 command_runner.go:130] > # default_ulimits = [
	I1008 14:43:46.369275  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369288  118459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 14:43:46.369296  118459 command_runner.go:130] > # no_pivot = false
	I1008 14:43:46.369305  118459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 14:43:46.369317  118459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 14:43:46.369327  118459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 14:43:46.369338  118459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 14:43:46.369348  118459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 14:43:46.369359  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369368  118459 command_runner.go:130] > # conmon = ""
	I1008 14:43:46.369375  118459 command_runner.go:130] > # Cgroup setting for conmon
	I1008 14:43:46.369386  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 14:43:46.369393  118459 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 14:43:46.369402  118459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 14:43:46.369410  118459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 14:43:46.369421  118459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 14:43:46.369430  118459 command_runner.go:130] > # conmon_env = [
	I1008 14:43:46.369435  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369456  118459 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 14:43:46.369465  118459 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 14:43:46.369475  118459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 14:43:46.369484  118459 command_runner.go:130] > # default_env = [
	I1008 14:43:46.369489  118459 command_runner.go:130] > # ]
	I1008 14:43:46.369498  118459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 14:43:46.369516  118459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1008 14:43:46.369528  118459 command_runner.go:130] > # selinux = false
	I1008 14:43:46.369539  118459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 14:43:46.369555  118459 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1008 14:43:46.369564  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369570  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.369582  118459 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1008 14:43:46.369602  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369609  118459 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1008 14:43:46.369619  118459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 14:43:46.369631  118459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 14:43:46.369644  118459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 14:43:46.369653  118459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 14:43:46.369661  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369672  118459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 14:43:46.369680  118459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 14:43:46.369690  118459 command_runner.go:130] > # the cgroup blockio controller.
	I1008 14:43:46.369697  118459 command_runner.go:130] > # blockio_config_file = ""
	I1008 14:43:46.369709  118459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 14:43:46.369718  118459 command_runner.go:130] > # blockio parameters.
	I1008 14:43:46.369724  118459 command_runner.go:130] > # blockio_reload = false
	I1008 14:43:46.369735  118459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 14:43:46.369744  118459 command_runner.go:130] > # irqbalance daemon.
	I1008 14:43:46.369857  118459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 14:43:46.369873  118459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 14:43:46.369884  118459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 14:43:46.369898  118459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 14:43:46.369909  118459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 14:43:46.369924  118459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 14:43:46.369934  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.369943  118459 command_runner.go:130] > # rdt_config_file = ""
	I1008 14:43:46.369950  118459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 14:43:46.369959  118459 command_runner.go:130] > # cgroup_manager = "systemd"
	I1008 14:43:46.369968  118459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 14:43:46.369979  118459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 14:43:46.369989  118459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 14:43:46.370002  118459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 14:43:46.370011  118459 command_runner.go:130] > # will be added.
	I1008 14:43:46.370027  118459 command_runner.go:130] > # default_capabilities = [
	I1008 14:43:46.370036  118459 command_runner.go:130] > # 	"CHOWN",
	I1008 14:43:46.370044  118459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 14:43:46.370051  118459 command_runner.go:130] > # 	"FSETID",
	I1008 14:43:46.370054  118459 command_runner.go:130] > # 	"FOWNER",
	I1008 14:43:46.370062  118459 command_runner.go:130] > # 	"SETGID",
	I1008 14:43:46.370083  118459 command_runner.go:130] > # 	"SETUID",
	I1008 14:43:46.370093  118459 command_runner.go:130] > # 	"SETPCAP",
	I1008 14:43:46.370099  118459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 14:43:46.370108  118459 command_runner.go:130] > # 	"KILL",
	I1008 14:43:46.370113  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370127  118459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 14:43:46.370140  118459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 14:43:46.370152  118459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 14:43:46.370164  118459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 14:43:46.370173  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370183  118459 command_runner.go:130] > default_sysctls = [
	I1008 14:43:46.370193  118459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 14:43:46.370198  118459 command_runner.go:130] > ]
	I1008 14:43:46.370209  118459 command_runner.go:130] > # List of devices on the host that a
	I1008 14:43:46.370249  118459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 14:43:46.370259  118459 command_runner.go:130] > # allowed_devices = [
	I1008 14:43:46.370266  118459 command_runner.go:130] > # 	"/dev/fuse",
	I1008 14:43:46.370270  118459 command_runner.go:130] > # 	"/dev/net/tun",
	I1008 14:43:46.370277  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370285  118459 command_runner.go:130] > # List of additional devices. specified as
	I1008 14:43:46.370300  118459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 14:43:46.370312  118459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 14:43:46.370324  118459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 14:43:46.370333  118459 command_runner.go:130] > # additional_devices = [
	I1008 14:43:46.370341  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370351  118459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 14:43:46.370360  118459 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 14:43:46.370366  118459 command_runner.go:130] > # 	"/etc/cdi",
	I1008 14:43:46.370370  118459 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 14:43:46.370378  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370387  118459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 14:43:46.370400  118459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 14:43:46.370411  118459 command_runner.go:130] > # Defaults to false.
	I1008 14:43:46.370422  118459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 14:43:46.370434  118459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 14:43:46.370462  118459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 14:43:46.370470  118459 command_runner.go:130] > # hooks_dir = [
	I1008 14:43:46.370481  118459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 14:43:46.370491  118459 command_runner.go:130] > # ]
	I1008 14:43:46.370503  118459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 14:43:46.370515  118459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 14:43:46.370526  118459 command_runner.go:130] > # its default mounts from the following two files:
	I1008 14:43:46.370532  118459 command_runner.go:130] > #
	I1008 14:43:46.370538  118459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 14:43:46.370550  118459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 14:43:46.370562  118459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 14:43:46.370571  118459 command_runner.go:130] > #
	I1008 14:43:46.370580  118459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 14:43:46.370593  118459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 14:43:46.370605  118459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 14:43:46.370615  118459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 14:43:46.370623  118459 command_runner.go:130] > #
	I1008 14:43:46.370629  118459 command_runner.go:130] > # default_mounts_file = ""
	I1008 14:43:46.370637  118459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 14:43:46.370647  118459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 14:43:46.370657  118459 command_runner.go:130] > # pids_limit = -1
	I1008 14:43:46.370667  118459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1008 14:43:46.370679  118459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 14:43:46.370693  118459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 14:43:46.370708  118459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 14:43:46.370717  118459 command_runner.go:130] > # log_size_max = -1
	I1008 14:43:46.370728  118459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 14:43:46.370735  118459 command_runner.go:130] > # log_to_journald = false
	I1008 14:43:46.370743  118459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 14:43:46.370755  118459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 14:43:46.370763  118459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 14:43:46.370774  118459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 14:43:46.370785  118459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 14:43:46.370794  118459 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 14:43:46.370804  118459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 14:43:46.370819  118459 command_runner.go:130] > # read_only = false
	I1008 14:43:46.370828  118459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 14:43:46.370841  118459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 14:43:46.370850  118459 command_runner.go:130] > # live configuration reload.
	I1008 14:43:46.370856  118459 command_runner.go:130] > # log_level = "info"
	I1008 14:43:46.370868  118459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 14:43:46.370884  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.370893  118459 command_runner.go:130] > # log_filter = ""
	I1008 14:43:46.370905  118459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370917  118459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 14:43:46.370923  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370934  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.370943  118459 command_runner.go:130] > # uid_mappings = ""
	I1008 14:43:46.370955  118459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 14:43:46.370967  118459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 14:43:46.370979  118459 command_runner.go:130] > # separated by comma.
	I1008 14:43:46.370994  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371003  118459 command_runner.go:130] > # gid_mappings = ""
	I1008 14:43:46.371012  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 14:43:46.371023  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371037  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371055  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371064  118459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 14:43:46.371076  118459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 14:43:46.371087  118459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 14:43:46.371100  118459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 14:43:46.371112  118459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 14:43:46.371122  118459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 14:43:46.371134  118459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 14:43:46.371146  118459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 14:43:46.371158  118459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 14:43:46.371168  118459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 14:43:46.371179  118459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 14:43:46.371188  118459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 14:43:46.371193  118459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 14:43:46.371204  118459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 14:43:46.371214  118459 command_runner.go:130] > # drop_infra_ctr = true
	I1008 14:43:46.371224  118459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 14:43:46.371235  118459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 14:43:46.371249  118459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 14:43:46.371258  118459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 14:43:46.371276  118459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 14:43:46.371285  118459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 14:43:46.371294  118459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 14:43:46.371306  118459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 14:43:46.371316  118459 command_runner.go:130] > # shared_cpuset = ""
	I1008 14:43:46.371326  118459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 14:43:46.371337  118459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 14:43:46.371346  118459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 14:43:46.371358  118459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 14:43:46.371366  118459 command_runner.go:130] > # pinns_path = ""
	I1008 14:43:46.371374  118459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 14:43:46.371385  118459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 14:43:46.371395  118459 command_runner.go:130] > # enable_criu_support = true
	I1008 14:43:46.371405  118459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 14:43:46.371417  118459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 14:43:46.371422  118459 command_runner.go:130] > # enable_pod_events = false
	I1008 14:43:46.371434  118459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 14:43:46.371453  118459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 14:43:46.371465  118459 command_runner.go:130] > # default_runtime = "crun"
	I1008 14:43:46.371473  118459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 14:43:46.371484  118459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1008 14:43:46.371501  118459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 14:43:46.371511  118459 command_runner.go:130] > # creation as a file is not desired either.
	I1008 14:43:46.371526  118459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 14:43:46.371537  118459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 14:43:46.371545  118459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 14:43:46.371552  118459 command_runner.go:130] > # ]
	I1008 14:43:46.371559  118459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 14:43:46.371568  118459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 14:43:46.371574  118459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 14:43:46.371579  118459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 14:43:46.371584  118459 command_runner.go:130] > #
	I1008 14:43:46.371589  118459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 14:43:46.371595  118459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 14:43:46.371599  118459 command_runner.go:130] > # runtime_type = "oci"
	I1008 14:43:46.371606  118459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 14:43:46.371610  118459 command_runner.go:130] > # inherit_default_runtime = false
	I1008 14:43:46.371621  118459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 14:43:46.371628  118459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 14:43:46.371633  118459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 14:43:46.371639  118459 command_runner.go:130] > # monitor_env = []
	I1008 14:43:46.371643  118459 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 14:43:46.371649  118459 command_runner.go:130] > # allowed_annotations = []
	I1008 14:43:46.371654  118459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 14:43:46.371660  118459 command_runner.go:130] > # no_sync_log = false
	I1008 14:43:46.371664  118459 command_runner.go:130] > # default_annotations = {}
	I1008 14:43:46.371672  118459 command_runner.go:130] > # stream_websockets = false
	I1008 14:43:46.371676  118459 command_runner.go:130] > # seccomp_profile = ""
	I1008 14:43:46.371698  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.371705  118459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 14:43:46.371711  118459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 14:43:46.371719  118459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 14:43:46.371727  118459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 14:43:46.371731  118459 command_runner.go:130] > #   in $PATH.
	I1008 14:43:46.371736  118459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 14:43:46.371743  118459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 14:43:46.371748  118459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 14:43:46.371753  118459 command_runner.go:130] > #   state.
	I1008 14:43:46.371759  118459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 14:43:46.371767  118459 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1008 14:43:46.371772  118459 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1008 14:43:46.371780  118459 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1008 14:43:46.371785  118459 command_runner.go:130] > #   the values from the default runtime on load time.
	I1008 14:43:46.371793  118459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 14:43:46.371801  118459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 14:43:46.371819  118459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 14:43:46.371827  118459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 14:43:46.371832  118459 command_runner.go:130] > #   The currently recognized values are:
	I1008 14:43:46.371840  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 14:43:46.371846  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 14:43:46.371854  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 14:43:46.371859  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 14:43:46.371869  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 14:43:46.371877  118459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 14:43:46.371885  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 14:43:46.371894  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 14:43:46.371900  118459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 14:43:46.371908  118459 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1008 14:43:46.371917  118459 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1008 14:43:46.371926  118459 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1008 14:43:46.371937  118459 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1008 14:43:46.371943  118459 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1008 14:43:46.371951  118459 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1008 14:43:46.371958  118459 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1008 14:43:46.371966  118459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 14:43:46.371973  118459 command_runner.go:130] > #   deprecated option "conmon".
	I1008 14:43:46.371980  118459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 14:43:46.371987  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 14:43:46.371993  118459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 14:43:46.372000  118459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 14:43:46.372006  118459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 14:43:46.372013  118459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 14:43:46.372019  118459 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1008 14:43:46.372025  118459 command_runner.go:130] > #   conmon-rs by using:
	I1008 14:43:46.372032  118459 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1008 14:43:46.372041  118459 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1008 14:43:46.372050  118459 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1008 14:43:46.372060  118459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 14:43:46.372067  118459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 14:43:46.372073  118459 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1008 14:43:46.372083  118459 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1008 14:43:46.372090  118459 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1008 14:43:46.372097  118459 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1008 14:43:46.372107  118459 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1008 14:43:46.372116  118459 command_runner.go:130] > #   when a machine crash happens.
	I1008 14:43:46.372125  118459 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1008 14:43:46.372132  118459 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1008 14:43:46.372139  118459 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1008 14:43:46.372145  118459 command_runner.go:130] > #   seccomp profile for the runtime.
	I1008 14:43:46.372151  118459 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1008 14:43:46.372160  118459 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1008 14:43:46.372165  118459 command_runner.go:130] > #
	I1008 14:43:46.372170  118459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 14:43:46.372175  118459 command_runner.go:130] > #
	I1008 14:43:46.372181  118459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 14:43:46.372187  118459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 14:43:46.372192  118459 command_runner.go:130] > #
	I1008 14:43:46.372198  118459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 14:43:46.372205  118459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 14:43:46.372208  118459 command_runner.go:130] > #
	I1008 14:43:46.372214  118459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 14:43:46.372219  118459 command_runner.go:130] > # feature.
	I1008 14:43:46.372222  118459 command_runner.go:130] > #
	I1008 14:43:46.372228  118459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 14:43:46.372235  118459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 14:43:46.372242  118459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 14:43:46.372251  118459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 14:43:46.372259  118459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 14:43:46.372261  118459 command_runner.go:130] > #
	I1008 14:43:46.372267  118459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 14:43:46.372275  118459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 14:43:46.372281  118459 command_runner.go:130] > #
	I1008 14:43:46.372286  118459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 14:43:46.372294  118459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 14:43:46.372297  118459 command_runner.go:130] > #
	I1008 14:43:46.372302  118459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 14:43:46.372310  118459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 14:43:46.372314  118459 command_runner.go:130] > # limitation.
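The notifier described above only works with a runtime handler that allows the annotation. A minimal sketch of such a drop-in, assuming the runc handler configured just below and the /etc/crio/crio.conf.d directory this log shows CRI-O reading; the file name 99-seccomp-notifier.conf is arbitrary:

# Allow the seccomp notifier annotation on the runc handler, then restart
# CRI-O so the drop-in is picked up. The handler's path/root are repeated
# from this log so the drop-in stays self-contained.
sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc]
runtime_path = "/usr/libexec/crio/runc"
runtime_root = "/run/runc"
allowed_annotations = [
    "io.kubernetes.cri-o.seccompNotifierAction",
]
EOF
sudo systemctl restart crio
# A pod opting in then sets restartPolicy: Never and the annotation
#   io.kubernetes.cri-o.seccompNotifierAction: "stop"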
	I1008 14:43:46.372320  118459 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1008 14:43:46.372325  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1008 14:43:46.372330  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372334  118459 command_runner.go:130] > runtime_root = "/run/crun"
	I1008 14:43:46.372343  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372349  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372353  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372358  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372363  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372367  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372374  118459 command_runner.go:130] > allowed_annotations = [
	I1008 14:43:46.372380  118459 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1008 14:43:46.372384  118459 command_runner.go:130] > ]
	I1008 14:43:46.372391  118459 command_runner.go:130] > privileged_without_host_devices = false
	I1008 14:43:46.372395  118459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 14:43:46.372402  118459 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1008 14:43:46.372406  118459 command_runner.go:130] > runtime_type = ""
	I1008 14:43:46.372411  118459 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 14:43:46.372415  118459 command_runner.go:130] > inherit_default_runtime = false
	I1008 14:43:46.372422  118459 command_runner.go:130] > runtime_config_path = ""
	I1008 14:43:46.372425  118459 command_runner.go:130] > container_min_memory = ""
	I1008 14:43:46.372432  118459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 14:43:46.372436  118459 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 14:43:46.372453  118459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 14:43:46.372461  118459 command_runner.go:130] > privileged_without_host_devices = false
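On the Kubernetes side, these handler names (crun, runc) are what a RuntimeClass refers to: its handler field is passed through the CRI and matched against the [crio.runtime.runtimes.*] entries above. A minimal sketch, assuming kubectl access to this cluster; the RuntimeClass name runc-handler, the pod name and the busybox image are illustrative only:

cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc-handler        # arbitrary example name
handler: runc               # must match [crio.runtime.runtimes.runc] above
---
apiVersion: v1
kind: Pod
metadata:
  name: runc-demo
spec:
  runtimeClassName: runc-handler
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF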
	I1008 14:43:46.372473  118459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 14:43:46.372482  118459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 14:43:46.372491  118459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 14:43:46.372498  118459 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1008 14:43:46.372509  118459 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1008 14:43:46.372520  118459 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1008 14:43:46.372530  118459 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1008 14:43:46.372537  118459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 14:43:46.372545  118459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 14:43:46.372555  118459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 14:43:46.372562  118459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 14:43:46.372569  118459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 14:43:46.372574  118459 command_runner.go:130] > # Example:
	I1008 14:43:46.372578  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 14:43:46.372585  118459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 14:43:46.372591  118459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 14:43:46.372602  118459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 14:43:46.372608  118459 command_runner.go:130] > # cpuset = "0-1"
	I1008 14:43:46.372612  118459 command_runner.go:130] > # cpushares = "5"
	I1008 14:43:46.372617  118459 command_runner.go:130] > # cpuquota = "1000"
	I1008 14:43:46.372621  118459 command_runner.go:130] > # cpuperiod = "100000"
	I1008 14:43:46.372626  118459 command_runner.go:130] > # cpulimit = "35"
	I1008 14:43:46.372630  118459 command_runner.go:130] > # Where:
	I1008 14:43:46.372634  118459 command_runner.go:130] > # The workload name is workload-type.
	I1008 14:43:46.372643  118459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 14:43:46.372650  118459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 14:43:46.372655  118459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 14:43:46.372665  118459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 14:43:46.372682  118459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
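Given the example workload above, a pod opts in with the activation annotation and can override individual resources per container using the $annotation_prefix.$resource/$ctrName form described in the comments. A minimal sketch, assuming kubectl access; the pod name, the container name "app" and the "10" value are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                       # activation annotation (value ignored)
    io.crio.workload-type.cpushares/app: "10"  # per-container cpushares override
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF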
	I1008 14:43:46.372689  118459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 14:43:46.372695  118459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 14:43:46.372701  118459 command_runner.go:130] > # Default value is set to true
	I1008 14:43:46.372706  118459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 14:43:46.372713  118459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 14:43:46.372717  118459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 14:43:46.372724  118459 command_runner.go:130] > # Default value is set to 'false'
	I1008 14:43:46.372728  118459 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 14:43:46.372735  118459 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1008 14:43:46.372743  118459 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1008 14:43:46.372748  118459 command_runner.go:130] > # timezone = ""
	I1008 14:43:46.372756  118459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 14:43:46.372761  118459 command_runner.go:130] > #
	I1008 14:43:46.372767  118459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 14:43:46.372775  118459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1008 14:43:46.372781  118459 command_runner.go:130] > [crio.image]
	I1008 14:43:46.372786  118459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 14:43:46.372792  118459 command_runner.go:130] > # default_transport = "docker://"
	I1008 14:43:46.372798  118459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 14:43:46.372822  118459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372828  118459 command_runner.go:130] > # global_auth_file = ""
	I1008 14:43:46.372833  118459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 14:43:46.372840  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372844  118459 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1008 14:43:46.372853  118459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 14:43:46.372861  118459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 14:43:46.372871  118459 command_runner.go:130] > # This option supports live configuration reload.
	I1008 14:43:46.372877  118459 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 14:43:46.372883  118459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 14:43:46.372888  118459 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1008 14:43:46.372896  118459 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1008 14:43:46.372902  118459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 14:43:46.372908  118459 command_runner.go:130] > # pause_command = "/pause"
	I1008 14:43:46.372914  118459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 14:43:46.372922  118459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 14:43:46.372927  118459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 14:43:46.372935  118459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 14:43:46.372940  118459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 14:43:46.372948  118459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 14:43:46.372952  118459 command_runner.go:130] > # pinned_images = [
	I1008 14:43:46.372958  118459 command_runner.go:130] > # ]
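As a concrete illustration of the three pattern types, a drop-in could pin the pause image used in this run plus a couple of broader patterns. A minimal sketch; the file name and the quay.io/example and *critical* patterns are made up for the example:

sudo tee /etc/crio/crio.conf.d/99-pinned-images.conf <<'EOF'
[crio.image]
pinned_images = [
    "registry.k8s.io/pause:3.10.1",   # exact match
    "quay.io/example/*",              # glob match (trailing wildcard)
    "*critical*",                     # keyword match (wildcards on both ends)
]
EOF
sudo systemctl restart crio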
	I1008 14:43:46.372963  118459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 14:43:46.372972  118459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 14:43:46.372978  118459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 14:43:46.372986  118459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 14:43:46.372991  118459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 14:43:46.372997  118459 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1008 14:43:46.373003  118459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 14:43:46.373012  118459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 14:43:46.373021  118459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 14:43:46.373029  118459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 14:43:46.373034  118459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 14:43:46.373042  118459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1008 14:43:46.373051  118459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 14:43:46.373058  118459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 14:43:46.373065  118459 command_runner.go:130] > # changing them here.
	I1008 14:43:46.373070  118459 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1008 14:43:46.373076  118459 command_runner.go:130] > # insecure_registries = [
	I1008 14:43:46.373079  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373087  118459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 14:43:46.373095  118459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 14:43:46.373104  118459 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 14:43:46.373112  118459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 14:43:46.373116  118459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 14:43:46.373124  118459 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1008 14:43:46.373130  118459 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1008 14:43:46.373134  118459 command_runner.go:130] > # auto_reload_registries = false
	I1008 14:43:46.373142  118459 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1008 14:43:46.373149  118459 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1008 14:43:46.373157  118459 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1008 14:43:46.373162  118459 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1008 14:43:46.373168  118459 command_runner.go:130] > # The mode of short name resolution.
	I1008 14:43:46.373174  118459 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1008 14:43:46.373183  118459 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1008 14:43:46.373190  118459 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1008 14:43:46.373195  118459 command_runner.go:130] > # short_name_mode = "enforcing"
	I1008 14:43:46.373204  118459 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1008 14:43:46.373212  118459 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1008 14:43:46.373216  118459 command_runner.go:130] > # oci_artifact_mount_support = true
	I1008 14:43:46.373224  118459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 14:43:46.373228  118459 command_runner.go:130] > # CNI plugins.
	I1008 14:43:46.373234  118459 command_runner.go:130] > [crio.network]
	I1008 14:43:46.373239  118459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 14:43:46.373246  118459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 14:43:46.373251  118459 command_runner.go:130] > # cni_default_network = ""
	I1008 14:43:46.373259  118459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 14:43:46.373266  118459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 14:43:46.373271  118459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 14:43:46.373277  118459 command_runner.go:130] > # plugin_dirs = [
	I1008 14:43:46.373280  118459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 14:43:46.373284  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373289  118459 command_runner.go:130] > # List of included pod metrics.
	I1008 14:43:46.373295  118459 command_runner.go:130] > # included_pod_metrics = [
	I1008 14:43:46.373297  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373304  118459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 14:43:46.373310  118459 command_runner.go:130] > [crio.metrics]
	I1008 14:43:46.373314  118459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 14:43:46.373320  118459 command_runner.go:130] > # enable_metrics = false
	I1008 14:43:46.373324  118459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 14:43:46.373331  118459 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 14:43:46.373337  118459 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 14:43:46.373347  118459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 14:43:46.373355  118459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 14:43:46.373359  118459 command_runner.go:130] > # metrics_collectors = [
	I1008 14:43:46.373364  118459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 14:43:46.373368  118459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 14:43:46.373371  118459 command_runner.go:130] > # 	"containers_oom_total",
	I1008 14:43:46.373374  118459 command_runner.go:130] > # 	"processes_defunct",
	I1008 14:43:46.373378  118459 command_runner.go:130] > # 	"operations_total",
	I1008 14:43:46.373381  118459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 14:43:46.373386  118459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 14:43:46.373389  118459 command_runner.go:130] > # 	"operations_errors_total",
	I1008 14:43:46.373393  118459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 14:43:46.373397  118459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 14:43:46.373400  118459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 14:43:46.373408  118459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 14:43:46.373411  118459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 14:43:46.373415  118459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 14:43:46.373422  118459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 14:43:46.373426  118459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 14:43:46.373430  118459 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1008 14:43:46.373436  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373450  118459 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1008 14:43:46.373460  118459 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1008 14:43:46.373468  118459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 14:43:46.373475  118459 command_runner.go:130] > # metrics_port = 9090
	I1008 14:43:46.373480  118459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 14:43:46.373486  118459 command_runner.go:130] > # metrics_socket = ""
	I1008 14:43:46.373490  118459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 14:43:46.373499  118459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 14:43:46.373508  118459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 14:43:46.373514  118459 command_runner.go:130] > # certificate on any modification event.
	I1008 14:43:46.373518  118459 command_runner.go:130] > # metrics_cert = ""
	I1008 14:43:46.373525  118459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 14:43:46.373530  118459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 14:43:46.373536  118459 command_runner.go:130] > # metrics_key = ""
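Since metrics are disabled by default, a short sketch of turning them on with the default host/port shown above and scraping the endpoint; the drop-in file name and the grep check are illustrative only:

sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
EOF
sudo systemctl restart crio
# Without metrics_cert/metrics_key the endpoint is served over plain HTTP:
curl -s http://127.0.0.1:9090/metrics | grep -m1 '^crio_'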
	I1008 14:43:46.373542  118459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 14:43:46.373548  118459 command_runner.go:130] > [crio.tracing]
	I1008 14:43:46.373554  118459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 14:43:46.373564  118459 command_runner.go:130] > # enable_tracing = false
	I1008 14:43:46.373571  118459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 14:43:46.373576  118459 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1008 14:43:46.373584  118459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 14:43:46.373591  118459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 14:43:46.373598  118459 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 14:43:46.373604  118459 command_runner.go:130] > [crio.nri]
	I1008 14:43:46.373608  118459 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 14:43:46.373614  118459 command_runner.go:130] > # enable_nri = true
	I1008 14:43:46.373618  118459 command_runner.go:130] > # NRI socket to listen on.
	I1008 14:43:46.373624  118459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 14:43:46.373628  118459 command_runner.go:130] > # NRI plugin directory to use.
	I1008 14:43:46.373635  118459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 14:43:46.373640  118459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 14:43:46.373647  118459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 14:43:46.373653  118459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 14:43:46.373688  118459 command_runner.go:130] > # nri_disable_connections = false
	I1008 14:43:46.373696  118459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 14:43:46.373701  118459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 14:43:46.373705  118459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 14:43:46.373712  118459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 14:43:46.373717  118459 command_runner.go:130] > # NRI default validator configuration.
	I1008 14:43:46.373725  118459 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1008 14:43:46.373733  118459 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1008 14:43:46.373737  118459 command_runner.go:130] > # can be restricted/rejected:
	I1008 14:43:46.373743  118459 command_runner.go:130] > # - OCI hook injection
	I1008 14:43:46.373748  118459 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1008 14:43:46.373755  118459 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1008 14:43:46.373760  118459 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1008 14:43:46.373766  118459 command_runner.go:130] > # - adjustment of linux namespaces
	I1008 14:43:46.373772  118459 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1008 14:43:46.373780  118459 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1008 14:43:46.373788  118459 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1008 14:43:46.373791  118459 command_runner.go:130] > #
	I1008 14:43:46.373795  118459 command_runner.go:130] > # [crio.nri.default_validator]
	I1008 14:43:46.373802  118459 command_runner.go:130] > # nri_enable_default_validator = false
	I1008 14:43:46.373811  118459 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1008 14:43:46.373819  118459 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1008 14:43:46.373827  118459 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1008 14:43:46.373832  118459 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1008 14:43:46.373839  118459 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1008 14:43:46.373843  118459 command_runner.go:130] > # nri_validator_required_plugins = [
	I1008 14:43:46.373848  118459 command_runner.go:130] > # ]
	I1008 14:43:46.373853  118459 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1008 14:43:46.373861  118459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 14:43:46.373865  118459 command_runner.go:130] > [crio.stats]
	I1008 14:43:46.373873  118459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 14:43:46.373880  118459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 14:43:46.373887  118459 command_runner.go:130] > # stats_collection_period = 0
	I1008 14:43:46.373892  118459 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1008 14:43:46.373900  118459 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1008 14:43:46.373907  118459 command_runner.go:130] > # collection_period = 0
	I1008 14:43:46.373928  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353034685Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1008 14:43:46.373938  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353062648Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1008 14:43:46.373948  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.35308236Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1008 14:43:46.373956  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353100078Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1008 14:43:46.373967  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353161884Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:43:46.373976  118459 command_runner.go:130] ! time="2025-10-08T14:43:46.353351718Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1008 14:43:46.373988  118459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 14:43:46.374064  118459 cni.go:84] Creating CNI manager for ""
	I1008 14:43:46.374077  118459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:43:46.374093  118459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:43:46.374116  118459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:43:46.374237  118459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:43:46.374300  118459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:43:46.382363  118459 command_runner.go:130] > kubeadm
	I1008 14:43:46.382384  118459 command_runner.go:130] > kubectl
	I1008 14:43:46.382389  118459 command_runner.go:130] > kubelet
	I1008 14:43:46.382411  118459 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:43:46.382482  118459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:43:46.390162  118459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:43:46.403097  118459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:43:46.415613  118459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
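Once the rendered config has been copied to /var/tmp/minikube/kubeadm.yaml.new as above, it can be sanity-checked with the bundled binary before it is used. A sketch, assuming the kubeadm release in this run ships the "config validate" subcommand:

sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new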
	I1008 14:43:46.428192  118459 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:43:46.432007  118459 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1008 14:43:46.432080  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:46.522533  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:46.535801  118459 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:43:46.535827  118459 certs.go:195] generating shared ca certs ...
	I1008 14:43:46.535849  118459 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:46.536002  118459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:43:46.536048  118459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:43:46.536069  118459 certs.go:257] generating profile certs ...
	I1008 14:43:46.536190  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:43:46.536242  118459 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:43:46.536277  118459 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:43:46.536291  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 14:43:46.536306  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 14:43:46.536318  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 14:43:46.536330  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 14:43:46.536342  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 14:43:46.536377  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 14:43:46.536393  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 14:43:46.536405  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 14:43:46.536476  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:43:46.536513  118459 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:43:46.536523  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:43:46.536550  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:43:46.536574  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:43:46.536595  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:43:46.536635  118459 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:43:46.536660  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.536675  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.536688  118459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.537241  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:43:46.555642  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:43:46.572819  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:43:46.590661  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:43:46.607931  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:43:46.625383  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:43:46.642336  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:43:46.659419  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:43:46.676486  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:43:46.693083  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:43:46.710326  118459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:43:46.727941  118459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:43:46.740780  118459 ssh_runner.go:195] Run: openssl version
	I1008 14:43:46.747268  118459 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1008 14:43:46.747351  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:43:46.756220  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760077  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760121  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.760189  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:43:46.794493  118459 command_runner.go:130] > 3ec20f2e
	I1008 14:43:46.794726  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:43:46.803126  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:43:46.811855  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815648  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815718  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.815789  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:43:46.849403  118459 command_runner.go:130] > b5213941
	I1008 14:43:46.849676  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:43:46.857958  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:43:46.866212  118459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869736  118459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869766  118459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.869798  118459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:43:46.904128  118459 command_runner.go:130] > 51391683
	I1008 14:43:46.904402  118459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
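The three blocks above all follow the same OpenSSL hash-directory convention: tools that trust /etc/ssl/certs look certificates up through symlinks named <subject-hash>.0, so minikube computes the hash and creates the link. A simplified sketch of the same steps for one certificate; the CERT path is the minikubeCA example from this log:

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941, as logged above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
# For a self-signed CA that is now present in the hash directory,
# verification against that directory should report OK:
openssl verify -CApath /etc/ssl/certs "$CERT"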
	I1008 14:43:46.913326  118459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917356  118459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:43:46.917385  118459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 14:43:46.917396  118459 command_runner.go:130] > Device: 8,1	Inode: 591874      Links: 1
	I1008 14:43:46.917405  118459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 14:43:46.917413  118459 command_runner.go:130] > Access: 2025-10-08 14:39:39.676864991 +0000
	I1008 14:43:46.917418  118459 command_runner.go:130] > Modify: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917426  118459 command_runner.go:130] > Change: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917431  118459 command_runner.go:130] >  Birth: 2025-10-08 14:35:35.375767545 +0000
	I1008 14:43:46.917505  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:43:46.951955  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.952157  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:43:46.986574  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:46.986789  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:43:47.021180  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.021253  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:43:47.054995  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.055238  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:43:47.088666  118459 command_runner.go:130] > Certificate will not expire
	I1008 14:43:47.089049  118459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:43:47.123893  118459 command_runner.go:130] > Certificate will not expire
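
The -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours; openssl prints "Certificate will not expire" and exits 0 when the cert stays valid, which is why the existing certs are reused rather than regenerated. A sketch of the same check (the cert path is one from the log; the else branch is illustrative):

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "Certificate will not expire"        # the message logged above
    else
        echo "certificate expires within 24h"     # would call for regeneration
    fi
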
	I1008 14:43:47.124156  118459 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:43:47.124254  118459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:43:47.124313  118459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:43:47.152244  118459 cri.go:89] found id: ""
	I1008 14:43:47.152307  118459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:43:47.160274  118459 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1008 14:43:47.160294  118459 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1008 14:43:47.160299  118459 command_runner.go:130] > /var/lib/minikube/etcd:
	I1008 14:43:47.160318  118459 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:43:47.160325  118459 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:43:47.160370  118459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:43:47.167663  118459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:43:47.167758  118459 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-367186" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.167803  118459 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "functional-367186" cluster setting kubeconfig missing "functional-367186" context setting]
	I1008 14:43:47.168217  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
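
The kubeconfig repair above adds the missing "functional-367186" cluster and context entries. minikube writes the file directly, but the result is roughly equivalent to the following kubectl config calls; the endpoint and certificate paths are the ones that appear in the client config a few lines below, and the rest is a sketch, not the exact code path:

    KCFG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
    kubectl --kubeconfig "$KCFG" config set-cluster functional-367186 \
        --server=https://192.168.49.2:8441 \
        --certificate-authority=/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt
    kubectl --kubeconfig "$KCFG" config set-credentials functional-367186 \
        --client-certificate=/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt \
        --client-key=/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
    kubectl --kubeconfig "$KCFG" config set-context functional-367186 \
        --cluster=functional-367186 --user=functional-367186
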
	I1008 14:43:47.169051  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.169269  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.170001  118459 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 14:43:47.170034  118459 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 14:43:47.170046  118459 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 14:43:47.170052  118459 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 14:43:47.170058  118459 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 14:43:47.170055  118459 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 14:43:47.170535  118459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:43:47.177804  118459 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 14:43:47.177829  118459 kubeadm.go:601] duration metric: took 17.498385ms to restartPrimaryControlPlane
	I1008 14:43:47.177836  118459 kubeadm.go:402] duration metric: took 53.689897ms to StartCluster
	I1008 14:43:47.177851  118459 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.177960  118459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.178692  118459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:43:47.178964  118459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:43:47.179000  118459 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 14:43:47.179182  118459 addons.go:69] Setting storage-provisioner=true in profile "functional-367186"
	I1008 14:43:47.179161  118459 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:43:47.179199  118459 addons.go:238] Setting addon storage-provisioner=true in "functional-367186"
	I1008 14:43:47.179280  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.179202  118459 addons.go:69] Setting default-storageclass=true in profile "functional-367186"
	I1008 14:43:47.179355  118459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-367186"
	I1008 14:43:47.179643  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.179723  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.181696  118459 out.go:179] * Verifying Kubernetes components...
	I1008 14:43:47.182986  118459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:43:47.197887  118459 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:43:47.198131  118459 kapi.go:59] client config for functional-367186: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 14:43:47.198516  118459 addons.go:238] Setting addon default-storageclass=true in "functional-367186"
	I1008 14:43:47.198560  118459 host.go:66] Checking if "functional-367186" exists ...
	I1008 14:43:47.198956  118459 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:43:47.199610  118459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:43:47.201208  118459 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.201228  118459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:43:47.201280  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.224257  118459 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.224285  118459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:43:47.224346  118459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:43:47.226258  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.244099  118459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:43:47.285014  118459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:43:47.298345  118459 node_ready.go:35] waiting up to 6m0s for node "functional-367186" to be "Ready" ...
	I1008 14:43:47.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.298934  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:47.336898  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.352323  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.393808  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.393854  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.393886  118459 retry.go:31] will retry after 231.755958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407397  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.407475  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.407496  118459 retry.go:31] will retry after 329.539024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
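
Each kubectl apply above fails with "connection refused" because the apiserver on localhost:8441 is still coming back up after the restart, so minikube keeps re-applying the addon manifests with growing delays (231ms, 329ms, 393ms, and so on). The same pattern, reduced to a shell sketch (the manifest path and kubectl binary are the ones in the log; the delay schedule and attempt cap are illustrative):

    delay=0.2
    for attempt in 1 2 3 4 5 6 7 8 9 10; do
        if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
            /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml; then
            break                                    # apiserver accepted the manifest
        fi
        sleep "$delay"                               # back off before the next attempt
        delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')
    done
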
	I1008 14:43:47.626786  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:47.679746  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.679800  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.679850  118459 retry.go:31] will retry after 393.16896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.738034  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:47.790656  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:47.792936  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.792970  118459 retry.go:31] will retry after 318.025551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:47.799129  118459 type.go:168] "Request Body" body=""
	I1008 14:43:47.799197  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:47.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.073934  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.111484  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.127850  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.127921  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.127943  118459 retry.go:31] will retry after 836.309595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.162277  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:48.164855  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.164886  118459 retry.go:31] will retry after 780.910281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:48.299211  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.299650  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.799557  118459 type.go:168] "Request Body" body=""
	I1008 14:43:48.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:48.799964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:48.946262  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:48.964996  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:48.998239  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.000519  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.000554  118459 retry.go:31] will retry after 896.283262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.018974  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.019036  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.019061  118459 retry.go:31] will retry after 1.078166751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.299460  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.299536  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.299868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:49.299950  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
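
The GET requests against /api/v1/nodes/functional-367186 above are the node readiness poll: the node object is re-queried roughly every half second, and "connection refused" is tolerated until the apiserver is reachable and the Ready condition turns True, within the 6m0s budget noted earlier. Expressed with kubectl instead of the client-go round tripper, the wait is roughly (node name, kubeconfig path, and timeout are from the log; the rest is illustrative):

    kubectl --kubeconfig /home/jenkins/minikube-integration/21681-94984/kubeconfig \
        wait --for=condition=Ready node/functional-367186 --timeout=6m
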
	I1008 14:43:49.799616  118459 type.go:168] "Request Body" body=""
	I1008 14:43:49.799720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:49.800392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:49.897595  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:49.950387  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:49.950427  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:49.950463  118459 retry.go:31] will retry after 1.484279714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.097767  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:50.149377  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:50.149421  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.149465  118459 retry.go:31] will retry after 1.600335715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:50.298625  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:50.798695  118459 type.go:168] "Request Body" body=""
	I1008 14:43:50.798808  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:50.799174  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.298904  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:51.435639  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:51.489347  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.491876  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.491909  118459 retry.go:31] will retry after 2.200481753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.750291  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:51.799001  118459 type.go:168] "Request Body" body=""
	I1008 14:43:51.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:51.799398  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:51.799489  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:51.803486  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:51.803590  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:51.803616  118459 retry.go:31] will retry after 2.262800355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:52.299098  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.299177  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.299542  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:52.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:43:52.799399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:52.799764  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.298621  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.299048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:53.692777  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:53.745144  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:53.745204  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.745229  118459 retry.go:31] will retry after 3.527117876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:53.799392  118459 type.go:168] "Request Body" body=""
	I1008 14:43:53.799480  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:53.799857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:53.799918  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:54.067271  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:54.118417  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:54.118478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.118503  118459 retry.go:31] will retry after 3.862999365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:54.298755  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.298838  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.299219  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:54.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:43:54.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:54.799074  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.298863  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.298942  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.299253  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:55.798989  118459 type.go:168] "Request Body" body=""
	I1008 14:43:55.799066  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:55.799421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:56.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:56.299793  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:56.799548  118459 type.go:168] "Request Body" body=""
	I1008 14:43:56.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:56.799947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.272978  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:43:57.298541  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.298620  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.298918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.321958  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:57.324558  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.324587  118459 retry.go:31] will retry after 4.383767223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:57.799184  118459 type.go:168] "Request Body" body=""
	I1008 14:43:57.799301  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:57.799689  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:57.982062  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:43:58.032702  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:43:58.035195  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.035237  118459 retry.go:31] will retry after 5.903970239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:43:58.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:58.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:43:58.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:58.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:43:58.799473  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:43:59.298999  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.299078  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.299479  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:43:59.799062  118459 type.go:168] "Request Body" body=""
	I1008 14:43:59.799145  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:43:59.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.299550  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:00.799200  118459 type.go:168] "Request Body" body=""
	I1008 14:44:00.799275  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:00.799625  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:00.799685  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:01.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.299385  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.299774  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:01.709356  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:01.759088  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:01.761882  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.761921  118459 retry.go:31] will retry after 6.257319935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:01.799124  118459 type.go:168] "Request Body" body=""
	I1008 14:44:01.799237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:01.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.299268  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.299716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:02.799390  118459 type.go:168] "Request Body" body=""
	I1008 14:44:02.799502  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:02.799880  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:02.799960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:03.299492  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.299563  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.299925  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.798665  118459 type.go:168] "Request Body" body=""
	I1008 14:44:03.798754  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:03.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:03.940379  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:03.990275  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:03.993084  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:03.993122  118459 retry.go:31] will retry after 4.028920288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:04.298653  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.299341  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:04.798956  118459 type.go:168] "Request Body" body=""
	I1008 14:44:04.799033  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:04.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:05.299051  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.299176  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.299598  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:05.299657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:05.799285  118459 type.go:168] "Request Body" body=""
	I1008 14:44:05.799356  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:05.799725  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.299393  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.299841  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:06.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:44:06.799593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:06.799944  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.299053  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:07.798714  118459 type.go:168] "Request Body" body=""
	I1008 14:44:07.798786  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:07.799261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:07.799325  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:08.019559  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:08.023109  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:08.072023  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.074947  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074963  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.074982  118459 retry.go:31] will retry after 6.922745297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:08.076401  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.076428  118459 retry.go:31] will retry after 5.441570095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:08.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.298802  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.299153  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:08.799104  118459 type.go:168] "Request Body" body=""
	I1008 14:44:08.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:08.799539  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.299229  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.299310  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.299686  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:09.799379  118459 type.go:168] "Request Body" body=""
	I1008 14:44:09.799472  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:09.799807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:09.799869  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:10.299531  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.299603  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.299958  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:10.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:44:10.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:10.799011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.298647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.299123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:11.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:44:11.798895  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:11.799225  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:12.298842  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.298915  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:12.299310  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:12.798893  118459 type.go:168] "Request Body" body=""
	I1008 14:44:12.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:12.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.299008  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:13.518328  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:13.572977  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:13.573020  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.573038  118459 retry.go:31] will retry after 15.052611026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:13.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:44:13.798632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:13.798973  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.298816  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.298894  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.299223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:14.798866  118459 type.go:168] "Request Body" body=""
	I1008 14:44:14.798962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:14.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:14.799351  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:14.998673  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:15.051035  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:15.051092  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.051116  118459 retry.go:31] will retry after 7.550335313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:15.299491  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.299568  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:15.799546  118459 type.go:168] "Request Body" body=""
	I1008 14:44:15.799646  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:15.800035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.298586  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.299006  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:16.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:44:16.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:16.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:17.298969  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.299043  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:17.299467  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:17.798964  118459 type.go:168] "Request Body" body=""
	I1008 14:44:17.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:17.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.299415  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:18.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:44:18.799349  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:18.799698  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:19.299431  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.299558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.299972  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:19.300047  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:19.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:44:19.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:19.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.299042  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:20.798619  118459 type.go:168] "Request Body" body=""
	I1008 14:44:20.798691  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:20.798998  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.298572  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.298698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.299121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:21.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:44:21.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:21.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:21.799149  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:22.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:22.602557  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:22.653552  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:22.656108  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.656138  118459 retry.go:31] will retry after 31.201355729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:22.799459  118459 type.go:168] "Request Body" body=""
	I1008 14:44:22.799558  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:22.799901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.299026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:23.798988  118459 type.go:168] "Request Body" body=""
	I1008 14:44:23.799061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:23.799476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:23.799539  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:24.299048  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.299131  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.299558  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:24.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:44:24.799285  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:24.799622  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.299437  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.299594  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.299994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:25.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:44:25.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:25.799056  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:26.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.298737  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.299066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:26.299138  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:26.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:44:26.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:26.799032  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.298934  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.299032  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:27.798977  118459 type.go:168] "Request Body" body=""
	I1008 14:44:27.799057  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:27.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:28.298998  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.299130  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.299524  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:28.299599  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:28.625918  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:28.675593  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:28.678080  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.678122  118459 retry.go:31] will retry after 23.952219527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:28.799477  118459 type.go:168] "Request Body" body=""
	I1008 14:44:28.799570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:28.799970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.298589  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.298685  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:29.798713  118459 type.go:168] "Request Body" body=""
	I1008 14:44:29.798787  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:29.799221  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.298792  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.299229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:30.798891  118459 type.go:168] "Request Body" body=""
	I1008 14:44:30.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:30.799335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:30.799398  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:31.298936  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.299373  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:31.798930  118459 type.go:168] "Request Body" body=""
	I1008 14:44:31.799039  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:31.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.299072  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:32.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:44:32.799097  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:32.799529  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:32.799596  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:33.299230  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.299325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.299740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:33.798515  118459 type.go:168] "Request Body" body=""
	I1008 14:44:33.798587  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:33.798936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.299656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:34.798590  118459 type.go:168] "Request Body" body=""
	I1008 14:44:34.798664  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:34.799020  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:35.298588  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.298666  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.299052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:35.299143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:35.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:44:35.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:35.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.299007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:36.798626  118459 type.go:168] "Request Body" body=""
	I1008 14:44:36.798702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:36.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:37.298948  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.299051  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:37.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:37.799006  118459 type.go:168] "Request Body" body=""
	I1008 14:44:37.799086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:37.799417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.299020  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.299100  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.299469  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:38.799369  118459 type.go:168] "Request Body" body=""
	I1008 14:44:38.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:38.799927  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:39.299580  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.299693  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.300082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:39.300150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:39.798611  118459 type.go:168] "Request Body" body=""
	I1008 14:44:39.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:39.799046  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.298592  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.298670  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:40.798637  118459 type.go:168] "Request Body" body=""
	I1008 14:44:40.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:40.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.299138  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:41.798729  118459 type.go:168] "Request Body" body=""
	I1008 14:44:41.798815  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:41.799152  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:41.799215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:42.298723  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.298799  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.299170  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:42.798731  118459 type.go:168] "Request Body" body=""
	I1008 14:44:42.798836  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:42.799203  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.298908  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.299278  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:43.799167  118459 type.go:168] "Request Body" body=""
	I1008 14:44:43.799250  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:43.799597  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:43.799661  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:44.299314  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.299416  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.299827  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:44.799577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:44.799657  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:44.800048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.298599  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.298673  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.299047  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:45.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:44:45.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:45.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:46.298671  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.298751  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.299126  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:46.299191  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:46.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:44:46.798850  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:46.799223  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.299119  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.299231  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.299611  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:47.799336  118459 type.go:168] "Request Body" body=""
	I1008 14:44:47.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:47.799765  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:48.299501  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.299582  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.299947  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:48.300006  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:48.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:44:48.798729  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:48.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.298752  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:49.798901  118459 type.go:168] "Request Body" body=""
	I1008 14:44:49.798982  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:49.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.298921  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.299003  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:50.798955  118459 type.go:168] "Request Body" body=""
	I1008 14:44:50.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:50.799416  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:50.799534  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:51.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.299214  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.299601  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:51.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:44:51.799388  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:51.799753  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.299413  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.299503  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.299839  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:52.631482  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:44:52.682310  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:52.684872  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:52.684901  118459 retry.go:31] will retry after 32.790446037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
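
Note on the retry above: the "retry.go:31] will retry after 32.790446037s" line shows minikube's addon applier re-running the failed kubectl apply after a randomized backoff instead of giving up on the first connection-refused error. Below is a minimal, generic retry-with-jittered-backoff helper in Go that captures the same idea; the function name, base delay, and attempt cap are assumptions chosen for illustration, not minikube's retry package.

// retry.go — illustrative sketch of retrying an operation with randomized,
// growing backoff, in the spirit of the "will retry after ..." log lines.
// All constants and names here are assumptions, not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, sleeping a jittered,
// roughly doubling delay between failures.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var lastErr error
	delay := base
	for i := 0; i < attempts; i++ {
		if lastErr = op(); lastErr == nil {
			return nil
		}
		// Add up to ~50% random jitter so concurrent retries do not align.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, lastErr, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused") // simulate a down apiserver
		}
		return nil
	})
	fmt.Println("result:", err)
}
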
	I1008 14:44:52.799279  118459 type.go:168] "Request Body" body=""
	I1008 14:44:52.799368  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:52.799719  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:52.799778  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:53.299429  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.299517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.299873  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:53.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:53.799081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:53.858347  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:44:53.912029  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:44:53.912083  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:53.912107  118459 retry.go:31] will retry after 18.370397631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:44:54.298601  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:54.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:54.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:54.799095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:55.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.299226  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:55.299302  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:55.798903  118459 type.go:168] "Request Body" body=""
	I1008 14:44:55.798996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:55.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.298927  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.299347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:56.798648  118459 type.go:168] "Request Body" body=""
	I1008 14:44:56.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:56.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:57.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.299509  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:57.299581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:44:57.799169  118459 type.go:168] "Request Body" body=""
	I1008 14:44:57.799283  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:57.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.299318  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.299391  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.299772  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:58.799563  118459 type.go:168] "Request Body" body=""
	I1008 14:44:58.799658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:58.800017  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.298677  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.299050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:44:59.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:44:59.798757  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:44:59.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:44:59.799217  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:00.298721  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.298821  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:00.798884  118459 type.go:168] "Request Body" body=""
	I1008 14:45:00.798972  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:00.799337  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.298871  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.298949  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.299314  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:01.798878  118459 type.go:168] "Request Body" body=""
	I1008 14:45:01.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:01.799285  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:01.799345  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:02.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.299353  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:02.798928  118459 type.go:168] "Request Body" body=""
	I1008 14:45:02.799012  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:02.799359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.298939  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.299014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.299359  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:03.799249  118459 type.go:168] "Request Body" body=""
	I1008 14:45:03.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:03.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:03.799744  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:04.299367  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.299468  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.299800  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:04.799513  118459 type.go:168] "Request Body" body=""
	I1008 14:45:04.799614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:04.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:05.798722  118459 type.go:168] "Request Body" body=""
	I1008 14:45:05.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:05.799201  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:06.298786  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.298890  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.299232  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:06.299292  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:06.798807  118459 type.go:168] "Request Body" body=""
	I1008 14:45:06.798900  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:06.799230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.299263  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.299613  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:07.799343  118459 type.go:168] "Request Body" body=""
	I1008 14:45:07.799420  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:07.799763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:08.299428  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.299527  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.299872  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:08.299937  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:08.798593  118459 type.go:168] "Request Body" body=""
	I1008 14:45:08.798667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:08.799001  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.298582  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:09.798617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:09.798698  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:09.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.298622  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:10.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:10.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:10.799101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:10.799164  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:11.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.298725  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:11.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:45:11.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:11.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.282739  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:45:12.299378  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.299488  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.299877  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:12.333950  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336478  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:12.336622  118459 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
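
Note on the failure above: every apply in this run fails for the same root cause. kubectl needs to download the OpenAPI schema from the apiserver to validate the manifest, and the apiserver on port 8441 is refusing connections; the suggested --validate=false flag would only skip the schema check, it would not make the apply succeed against an unreachable apiserver. A quick way to confirm the root cause during debugging is a plain TCP probe against the endpoint, sketched below; the address comes from the log and the timeout is arbitrary.

// probe.go — tiny TCP reachability check for the apiserver endpoint seen in
// the log (192.168.49.2:8441). Purely diagnostic; the timeout is arbitrary.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Matches the failures in this log: "connect: connection refused".
		fmt.Printf("apiserver at %s is unreachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver at %s accepts TCP connections\n", addr)
}
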
	I1008 14:45:12.799135  118459 type.go:168] "Request Body" body=""
	I1008 14:45:12.799209  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:12.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:12.799657  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:13.299289  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.299709  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:13.798861  118459 type.go:168] "Request Body" body=""
	I1008 14:45:13.798943  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:13.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.298849  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.298932  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.299258  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:14.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:45:14.799040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:14.799406  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:15.299027  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.299098  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:15.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:15.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:45:15.799155  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:15.799530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.299229  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.299576  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:16.799320  118459 type.go:168] "Request Body" body=""
	I1008 14:45:16.799402  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:16.799740  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.298566  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:17.798602  118459 type.go:168] "Request Body" body=""
	I1008 14:45:17.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:17.799015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:17.799082  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:18.298617  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.298700  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:18.798851  118459 type.go:168] "Request Body" body=""
	I1008 14:45:18.798935  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:18.799287  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.298852  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.299298  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:19.798906  118459 type.go:168] "Request Body" body=""
	I1008 14:45:19.798988  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:19.799347  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:19.799406  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:20.298933  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.299005  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.299355  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:20.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:45:20.799025  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:20.799390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.298968  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.299041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.299411  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:21.799011  118459 type.go:168] "Request Body" body=""
	I1008 14:45:21.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:21.799369  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:22.299008  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.299101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.299519  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:22.299580  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:22.799213  118459 type.go:168] "Request Body" body=""
	I1008 14:45:22.799289  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:22.799634  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.299390  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.299767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:23.799544  118459 type.go:168] "Request Body" body=""
	I1008 14:45:23.799617  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:23.799951  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.298561  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.298641  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.298990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:24.798607  118459 type.go:168] "Request Body" body=""
	I1008 14:45:24.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:24.799048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:24.799112  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:25.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.298686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:25.476423  118459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:45:25.531081  118459 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531142  118459 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 14:45:25.531259  118459 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 14:45:25.534376  118459 out.go:179] * Enabled addons: 
	I1008 14:45:25.535655  118459 addons.go:514] duration metric: took 1m38.356657385s for enable addons: enabled=[]
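
Note on the summary above: the addon-enable phase ends with a "duration metric" of about 1m38s and an empty enabled list, because both manifests ultimately failed to apply. Timing a phase like this in Go is just a time.Now/time.Since pair; the sketch below is generic and assumes nothing about minikube's internals.

// durations.go — generic phase timing in the style of the
// "duration metric: took ... for enable addons" log line.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{} // in this run nothing was enabled

	// ... run the addon enable callbacks here ...
	time.Sleep(50 * time.Millisecond) // stand-in for real work

	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}
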
	I1008 14:45:25.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:45:25.798640  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:25.798959  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.298537  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.299011  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:26.798610  118459 type.go:168] "Request Body" body=""
	I1008 14:45:26.798686  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:26.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:26.799185  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:27.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.299111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:27.799210  118459 type.go:168] "Request Body" body=""
	I1008 14:45:27.799306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:27.799715  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.299395  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.299520  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.299905  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:28.798594  118459 type.go:168] "Request Body" body=""
	I1008 14:45:28.798692  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:28.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:29.298630  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.298716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:29.299127  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:29.798717  118459 type.go:168] "Request Body" body=""
	I1008 14:45:29.798816  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:29.799196  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.299218  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:30.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:45:30.798893  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:30.799252  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:31.298834  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.299230  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:31.299294  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:31.798829  118459 type.go:168] "Request Body" body=""
	I1008 14:45:31.798912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:31.799264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.298806  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.298882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.299262  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:32.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:45:32.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:32.799271  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:33.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.298966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.299345  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:33.299417  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:33.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:33.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:33.799654  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.299321  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.299423  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.299763  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:34.799422  118459 type.go:168] "Request Body" body=""
	I1008 14:45:34.799533  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:34.799902  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.298559  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.298639  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.298963  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:35.798592  118459 type.go:168] "Request Body" body=""
	I1008 14:45:35.798679  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:35.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:35.799128  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:36.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.299156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:36.798655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:36.798779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:36.799148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.299530  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:37.799212  118459 type.go:168] "Request Body" body=""
	I1008 14:45:37.799300  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:37.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:37.799713  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:38.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.299405  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.299766  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:38.799558  118459 type.go:168] "Request Body" body=""
	I1008 14:45:38.799667  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:38.800040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.298689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.299038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:39.798644  118459 type.go:168] "Request Body" body=""
	I1008 14:45:39.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:39.799106  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:40.298658  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.299095  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:40.299169  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:40.798657  118459 type.go:168] "Request Body" body=""
	I1008 14:45:40.798736  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:40.799078  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.298629  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.299061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:41.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:45:41.798741  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:41.799102  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:42.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.299168  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:42.299237  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:42.798716  118459 type.go:168] "Request Body" body=""
	I1008 14:45:42.798788  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:42.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.298801  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.298887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:43.799130  118459 type.go:168] "Request Body" body=""
	I1008 14:45:43.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:43.799591  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:44.299252  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.299339  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.299712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:44.299773  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:44.799365  118459 type.go:168] "Request Body" body=""
	I1008 14:45:44.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:44.799825  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.299172  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.299287  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.299676  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:45.798663  118459 type.go:168] "Request Body" body=""
	I1008 14:45:45.798752  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:45.799167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.298781  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.298881  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.299294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:46.798856  118459 type.go:168] "Request Body" body=""
	I1008 14:45:46.798931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:46.799293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:46.799356  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:47.299154  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.299246  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:47.799327  118459 type.go:168] "Request Body" body=""
	I1008 14:45:47.799406  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:47.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.299439  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.299542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.299919  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:48.798628  118459 type.go:168] "Request Body" body=""
	I1008 14:45:48.798704  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:48.799075  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:49.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:49.299162  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:49.798684  118459 type.go:168] "Request Body" body=""
	I1008 14:45:49.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:49.799141  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.298714  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.298795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.299144  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:50.798776  118459 type.go:168] "Request Body" body=""
	I1008 14:45:50.798853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:50.799207  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:51.298712  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.298791  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.299166  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:51.299231  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:51.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:45:51.798829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:51.799189  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.298885  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.299246  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:52.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:45:52.798953  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:52.799319  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.298699  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.298776  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.299137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:53.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:45:53.799143  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:53.799505  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:53.799579  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:54.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.299276  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.299636  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:54.799331  118459 type.go:168] "Request Body" body=""
	I1008 14:45:54.799408  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:54.799784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.299472  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:55.798585  118459 type.go:168] "Request Body" body=""
	I1008 14:45:55.798665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:55.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:56.298627  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.298705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:56.299148  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:56.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:45:56.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:56.799077  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.299046  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.299146  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.299523  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:57.799189  118459 type.go:168] "Request Body" body=""
	I1008 14:45:57.799274  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:57.799642  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:58.299356  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.299473  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.299961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:45:58.300023  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:45:58.798632  118459 type.go:168] "Request Body" body=""
	I1008 14:45:58.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:58.799059  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.298721  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:45:59.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:45:59.798755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:45:59.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:00.798766  118459 type.go:168] "Request Body" body=""
	I1008 14:46:00.798873  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:00.799228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:00.799293  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:01.298587  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.298661  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.299023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:01.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:01.798731  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:01.799123  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.298698  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:02.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:46:02.798801  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:02.799202  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:03.298750  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.298833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:03.299244  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:03.799037  118459 type.go:168] "Request Body" body=""
	I1008 14:46:03.799122  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:03.799491  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.299167  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.299249  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.299630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:04.799329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:04.799414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:04.799795  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:05.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.299567  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.299956  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:05.300019  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:05.798576  118459 type.go:168] "Request Body" body=""
	I1008 14:46:05.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:05.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.298578  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.298688  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:06.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:46:06.798734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:06.799117  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.299024  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.299118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.299493  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:07.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:46:07.799139  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:07.799496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:07.799569  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:08.299035  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.299126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.299518  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:08.799377  118459 type.go:168] "Request Body" body=""
	I1008 14:46:08.799479  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:08.799812  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.298529  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.298607  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.298931  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:09.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:09.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:09.799111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:10.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.298735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.299130  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:10.299230  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:10.798708  118459 type.go:168] "Request Body" body=""
	I1008 14:46:10.798795  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:10.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.298579  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.298650  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.298984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:11.798571  118459 type.go:168] "Request Body" body=""
	I1008 14:46:11.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:11.798994  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.299013  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:12.798609  118459 type.go:168] "Request Body" body=""
	I1008 14:46:12.798689  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:12.799038  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:12.799099  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:13.298602  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:13.798949  118459 type.go:168] "Request Body" body=""
	I1008 14:46:13.799028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:13.799365  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.299036  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.299417  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:14.798995  118459 type.go:168] "Request Body" body=""
	I1008 14:46:14.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:14.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:14.799507  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:15.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.299118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:15.798739  118459 type.go:168] "Request Body" body=""
	I1008 14:46:15.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:15.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.298707  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.299195  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:16.798747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:16.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:16.799211  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:17.299171  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.299252  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.299620  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:17.299687  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:17.799351  118459 type.go:168] "Request Body" body=""
	I1008 14:46:17.799429  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:17.799815  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.299581  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.299663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.300026  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:18.798911  118459 type.go:168] "Request Body" body=""
	I1008 14:46:18.798995  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:18.799361  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.298941  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.299017  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.299380  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:19.798976  118459 type.go:168] "Request Body" body=""
	I1008 14:46:19.799059  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:19.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:19.799484  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:20.298983  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.299063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.299433  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:20.799000  118459 type.go:168] "Request Body" body=""
	I1008 14:46:20.799073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:20.799422  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.299052  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:21.798986  118459 type.go:168] "Request Body" body=""
	I1008 14:46:21.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:21.799475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:21.799540  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:22.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.299073  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.299421  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:22.799016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:22.799089  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:22.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.299012  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.299086  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.299426  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:23.799352  118459 type.go:168] "Request Body" body=""
	I1008 14:46:23.799434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:23.799781  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:23.799842  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:24.299407  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.299843  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:24.799556  118459 type.go:168] "Request Body" body=""
	I1008 14:46:24.799631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:24.799961  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.298635  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.298981  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:25.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:25.798735  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:25.799082  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:26.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.299076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:26.299150  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:26.798664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:26.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:26.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.298937  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.299013  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.299343  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:27.798925  118459 type.go:168] "Request Body" body=""
	I1008 14:46:27.798999  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:27.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:28.298903  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.298998  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.299342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:28.299409  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:28.799216  118459 type.go:168] "Request Body" body=""
	I1008 14:46:28.799293  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:28.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.299329  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.299414  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.299824  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:29.799545  118459 type.go:168] "Request Body" body=""
	I1008 14:46:29.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:29.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.298574  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.298654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.299010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:30.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:46:30.798712  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:30.799063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:30.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:31.298642  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.299084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:31.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:46:31.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:31.799089  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.298660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.298734  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:32.798689  118459 type.go:168] "Request Body" body=""
	I1008 14:46:32.798772  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:32.799169  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:32.799234  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:33.298791  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.298877  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:33.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:46:33.799101  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:33.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.299040  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.299520  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:34.799151  118459 type.go:168] "Request Body" body=""
	I1008 14:46:34.799224  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:34.799552  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:34.799606  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:35.299196  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.299279  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.299627  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:35.799293  118459 type.go:168] "Request Body" body=""
	I1008 14:46:35.799369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:35.799727  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.299400  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.299510  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.299857  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:36.799528  118459 type.go:168] "Request Body" body=""
	I1008 14:46:36.799601  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:36.799936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:36.799998  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:37.298659  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.298750  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.299094  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:37.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:37.798758  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:37.799112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.298715  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.298793  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.299167  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:38.799005  118459 type.go:168] "Request Body" body=""
	I1008 14:46:38.799084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:38.799470  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:39.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.299482  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:39.299547  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:39.799057  118459 type.go:168] "Request Body" body=""
	I1008 14:46:39.799149  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:39.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.299162  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.299239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.299588  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:40.799254  118459 type.go:168] "Request Body" body=""
	I1008 14:46:40.799325  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:40.799695  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:41.299348  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.299424  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.299798  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:41.299888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:41.799486  118459 type.go:168] "Request Body" body=""
	I1008 14:46:41.799571  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:41.799908  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.298659  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.299014  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:42.798601  118459 type.go:168] "Request Body" body=""
	I1008 14:46:42.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:42.799021  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.298597  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.298675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.299015  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:43.798645  118459 type.go:168] "Request Body" body=""
	I1008 14:46:43.798718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:43.799099  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:43.799158  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:44.298648  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.298727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.299079  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:44.798646  118459 type.go:168] "Request Body" body=""
	I1008 14:46:44.798720  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:44.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.298651  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.298724  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.299086  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:45.798658  118459 type.go:168] "Request Body" body=""
	I1008 14:46:45.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:45.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:45.799190  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:46.298664  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.298740  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.299081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:46.798660  118459 type.go:168] "Request Body" body=""
	I1008 14:46:46.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:46.799116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.299010  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.299116  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.299468  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:47.799058  118459 type.go:168] "Request Body" body=""
	I1008 14:46:47.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:47.799515  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:47.799577  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:48.299145  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.299237  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.299586  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:48.799465  118459 type.go:168] "Request Body" body=""
	I1008 14:46:48.799540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:48.799893  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.299567  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.300081  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:49.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:46:49.798774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:49.799156  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:50.298747  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.298852  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:50.299334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:50.798849  118459 type.go:168] "Request Body" body=""
	I1008 14:46:50.798940  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:50.799370  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.298974  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.299068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.299474  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:51.799088  118459 type.go:168] "Request Body" body=""
	I1008 14:46:51.799186  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:51.799617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:52.299319  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.299399  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.299750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:52.299815  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:52.799425  118459 type.go:168] "Request Body" body=""
	I1008 14:46:52.799532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:52.799968  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.298596  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.299057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:53.798951  118459 type.go:168] "Request Body" body=""
	I1008 14:46:53.799031  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:53.799358  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.298997  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.299141  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.299485  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:54.799052  118459 type.go:168] "Request Body" body=""
	I1008 14:46:54.799134  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:54.799494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:54.799557  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:55.299016  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.299103  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.299471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:55.799007  118459 type.go:168] "Request Body" body=""
	I1008 14:46:55.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:55.799427  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.299476  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:56.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:46:56.799071  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:56.799429  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:57.299385  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.299507  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.299911  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:57.299974  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:46:57.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:46:57.799621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:57.799954  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.298526  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.298614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.298971  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:58.798638  118459 type.go:168] "Request Body" body=""
	I1008 14:46:58.798717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:58.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.298676  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.299184  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:46:59.798757  118459 type.go:168] "Request Body" body=""
	I1008 14:46:59.798865  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:46:59.799194  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:46:59.799261  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:00.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.298867  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.299242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:00.798799  118459 type.go:168] "Request Body" body=""
	I1008 14:47:00.798882  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:00.799276  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.298869  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.298960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.299308  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:01.798868  118459 type.go:168] "Request Body" body=""
	I1008 14:47:01.798957  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:01.799328  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:01.799395  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:02.298910  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.299004  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.299367  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:02.798967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:02.799064  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:02.799471  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.299021  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.299109  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:03.799358  118459 type.go:168] "Request Body" body=""
	I1008 14:47:03.799437  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:03.799820  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:03.799888  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:04.299467  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.299570  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:04.798525  118459 type.go:168] "Request Body" body=""
	I1008 14:47:04.798605  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:04.798957  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.299064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:05.798652  118459 type.go:168] "Request Body" body=""
	I1008 14:47:05.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:05.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:06.298661  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.298755  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.299139  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:06.299201  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:06.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:06.798775  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:06.799212  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.299173  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.299680  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:07.799348  118459 type.go:168] "Request Body" body=""
	I1008 14:47:07.799431  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:07.799818  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:08.299466  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.299559  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.299887  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:08.299953  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:08.798622  118459 type.go:168] "Request Body" body=""
	I1008 14:47:08.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:08.799122  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.298666  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.298743  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.299110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:09.798680  118459 type.go:168] "Request Body" body=""
	I1008 14:47:09.798767  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:09.799118  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.298717  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.298823  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.299192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:10.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:47:10.798833  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:10.799192  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:10.799264  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:11.298772  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.298854  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.299193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:11.798748  118459 type.go:168] "Request Body" body=""
	I1008 14:47:11.798887  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:11.799274  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.298832  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.298912  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.299277  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:12.798808  118459 type.go:168] "Request Body" body=""
	I1008 14:47:12.798896  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:12.799275  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:12.799334  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:13.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.298906  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.299247  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:13.799086  118459 type.go:168] "Request Body" body=""
	I1008 14:47:13.799171  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:13.799549  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.299233  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.299317  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.299685  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:14.799321  118459 type.go:168] "Request Body" body=""
	I1008 14:47:14.799395  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:14.799748  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:14.799845  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:15.299364  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.299434  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.299756  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:15.799417  118459 type.go:168] "Request Body" body=""
	I1008 14:47:15.799517  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:15.799861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.299614  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.299915  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:16.798573  118459 type.go:168] "Request Body" body=""
	I1008 14:47:16.798648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:16.799007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:17.298827  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.299306  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:17.299381  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:17.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:47:17.798968  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:17.799302  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.298694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:18.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:47:18.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:18.799418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:19.299079  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.299153  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.299571  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:19.299630  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:19.799185  118459 type.go:168] "Request Body" body=""
	I1008 14:47:19.799262  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:19.799651  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.299313  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.299398  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.299801  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:20.799547  118459 type.go:168] "Request Body" body=""
	I1008 14:47:20.799634  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:20.800024  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.298611  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:21.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:21.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:21.799103  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:21.799168  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:22.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.298730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.299091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:22.798659  118459 type.go:168] "Request Body" body=""
	I1008 14:47:22.798732  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:22.799137  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.298704  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.298779  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.299115  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:23.798943  118459 type.go:168] "Request Body" body=""
	I1008 14:47:23.799042  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:23.799413  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:23.799509  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:24.298964  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.299040  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.299390  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:24.798583  118459 type.go:168] "Request Body" body=""
	I1008 14:47:24.798690  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:24.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.298624  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.298702  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.299069  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:25.798672  118459 type.go:168] "Request Body" body=""
	I1008 14:47:25.798756  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:25.799107  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:26.298675  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.298759  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.299125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:26.299192  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:26.798697  118459 type.go:168] "Request Body" body=""
	I1008 14:47:26.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:26.799142  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.299005  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.299090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.299419  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:27.799045  118459 type.go:168] "Request Body" body=""
	I1008 14:47:27.799137  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:27.799544  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:28.299142  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.299269  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.299617  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:28.299678  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:28.799473  118459 type.go:168] "Request Body" body=""
	I1008 14:47:28.799560  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:28.799899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.299557  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.299985  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:29.798543  118459 type.go:168] "Request Body" body=""
	I1008 14:47:29.798622  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:29.798983  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.298553  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.298632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.298995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:30.798615  118459 type.go:168] "Request Body" body=""
	I1008 14:47:30.798697  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:30.799110  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:30.799179  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:31.298618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.298695  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.299073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:31.798671  118459 type.go:168] "Request Body" body=""
	I1008 14:47:31.798745  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:31.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.298577  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.298648  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.298977  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:32.798588  118459 type.go:168] "Request Body" body=""
	I1008 14:47:32.798663  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:32.799041  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:33.298626  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.298701  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.299022  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:33.299097  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:33.798957  118459 type.go:168] "Request Body" body=""
	I1008 14:47:33.799041  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:33.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.299002  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.299095  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.299494  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:34.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:47:34.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:34.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:35.299241  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.299344  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:35.299795  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:35.799437  118459 type.go:168] "Request Body" body=""
	I1008 14:47:35.799530  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:35.799892  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.299548  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.299626  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.299964  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:36.798599  118459 type.go:168] "Request Body" body=""
	I1008 14:47:36.798674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:36.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.298967  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.299050  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.299424  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:37.798987  118459 type.go:168] "Request Body" body=""
	I1008 14:47:37.799063  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:37.799403  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:37.799496  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:38.298988  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.299067  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.299408  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:38.799345  118459 type.go:168] "Request Body" body=""
	I1008 14:47:38.799481  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:38.799859  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.299510  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.299593  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.299976  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:39.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:47:39.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:39.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:40.298711  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.298796  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.299180  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:40.299245  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:40.798752  118459 type.go:168] "Request Body" body=""
	I1008 14:47:40.798837  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:40.799193  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.298775  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.298853  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.299237  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:41.798864  118459 type.go:168] "Request Body" body=""
	I1008 14:47:41.798946  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:41.799303  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:42.298889  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.298962  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.299322  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:42.299384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:42.798944  118459 type.go:168] "Request Body" body=""
	I1008 14:47:42.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:42.799388  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.298977  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.299047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.299368  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:43.799221  118459 type.go:168] "Request Body" body=""
	I1008 14:47:43.799302  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:43.799663  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:44.299294  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.299790  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:44.299872  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:44.799433  118459 type.go:168] "Request Body" body=""
	I1008 14:47:44.799542  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:44.799888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.299563  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.299636  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.299993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:45.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:47:45.799202  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:45.799537  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:46.299512  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.299633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.300025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:46.300089  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:46.798790  118459 type.go:168] "Request Body" body=""
	I1008 14:47:46.798884  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:46.799229  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.299087  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.299184  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.299563  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:47.798932  118459 type.go:168] "Request Body" body=""
	I1008 14:47:47.799009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:47.799428  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.299029  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.299106  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.299501  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:48.799380  118459 type.go:168] "Request Body" body=""
	I1008 14:47:48.799486  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:48.799833  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:48.799903  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:49.299564  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.299643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.300007  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:49.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:47:49.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:49.799052  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.298600  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.299045  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:50.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:47:50.798715  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:50.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:51.298640  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.298722  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.299093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:51.299156  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:51.798681  118459 type.go:168] "Request Body" body=""
	I1008 14:47:51.798761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:51.799132  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.298710  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.298829  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.299205  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:52.798798  118459 type.go:168] "Request Body" body=""
	I1008 14:47:52.798883  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:52.799265  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:53.298856  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.298931  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:53.299362  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:53.799190  118459 type.go:168] "Request Body" body=""
	I1008 14:47:53.799266  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:53.799652  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.299296  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.299369  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:54.799472  118459 type.go:168] "Request Body" body=""
	I1008 14:47:54.799553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:54.799952  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.298584  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.298660  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.299029  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:55.798627  118459 type.go:168] "Request Body" body=""
	I1008 14:47:55.798713  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:55.799105  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:55.799173  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:56.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.298834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.299222  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:56.798788  118459 type.go:168] "Request Body" body=""
	I1008 14:47:56.798866  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:56.799242  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.299122  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.299199  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.299496  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:57.799239  118459 type.go:168] "Request Body" body=""
	I1008 14:47:57.799326  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:57.799714  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:47:57.799774  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:47:58.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.299464  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.299809  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:58.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:47:58.798672  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:58.799025  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.298591  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.298674  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.299040  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:47:59.798618  118459 type.go:168] "Request Body" body=""
	I1008 14:47:59.798694  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:47:59.799057  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:00.298633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.298709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.299101  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:00.299182  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:00.798633  118459 type.go:168] "Request Body" body=""
	I1008 14:48:00.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:00.799076  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.298687  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.298762  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.299124  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:01.798694  118459 type.go:168] "Request Body" body=""
	I1008 14:48:01.798782  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:01.799125  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.298730  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.298807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.299143  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:02.798743  118459 type.go:168] "Request Body" body=""
	I1008 14:48:02.798820  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:02.799178  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:02.799242  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:03.298766  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.299191  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:03.799090  118459 type.go:168] "Request Body" body=""
	I1008 14:48:03.799168  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:03.799556  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.298646  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.299049  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:04.798656  118459 type.go:168] "Request Body" body=""
	I1008 14:48:04.798746  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:04.799159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:05.298725  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.298803  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.299148  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:05.299215  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:05.798756  118459 type.go:168] "Request Body" body=""
	I1008 14:48:05.798859  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:05.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.298782  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.298856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.299228  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:06.798974  118459 type.go:168] "Request Body" body=""
	I1008 14:48:06.799046  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:06.799394  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:07.299194  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.299273  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:07.299732  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:07.799538  118459 type.go:168] "Request Body" body=""
	I1008 14:48:07.799609  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:07.799950  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.298682  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.299147  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:08.799044  118459 type.go:168] "Request Body" body=""
	I1008 14:48:08.799132  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:08.799521  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:09.299345  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.299428  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.299805  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:09.299871  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:09.798647  118459 type.go:168] "Request Body" body=""
	I1008 14:48:09.798727  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:09.799096  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.298815  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.298898  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:10.799063  118459 type.go:168] "Request Body" body=""
	I1008 14:48:10.799142  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:10.799548  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:11.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.299512  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.299861  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:11.299938  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:11.798636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:11.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:11.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.298858  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.298934  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.299293  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:12.798635  118459 type.go:168] "Request Body" body=""
	I1008 14:48:12.798709  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:12.799050  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.298773  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.298847  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.299208  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:13.799046  118459 type.go:168] "Request Body" body=""
	I1008 14:48:13.799118  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:13.799495  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:13.799564  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:14.299338  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.299418  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.299784  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:14.798558  118459 type.go:168] "Request Body" body=""
	I1008 14:48:14.798633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:14.798966  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.298689  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.298769  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.299111  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:15.798836  118459 type.go:168] "Request Body" body=""
	I1008 14:48:15.798919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:15.799244  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:16.299034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.299119  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.299472  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:16.299531  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:16.799263  118459 type.go:168] "Request Body" body=""
	I1008 14:48:16.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:16.799716  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.299535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.299984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:17.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:48:17.798714  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:17.799093  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.298690  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.298768  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.299127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:18.798926  118459 type.go:168] "Request Body" body=""
	I1008 14:48:18.799002  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:18.799340  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:18.799405  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:19.298954  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.299028  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.299371  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:19.798980  118459 type.go:168] "Request Body" body=""
	I1008 14:48:19.799068  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:19.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.298992  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.299074  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.299425  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:20.798994  118459 type.go:168] "Request Body" body=""
	I1008 14:48:20.799140  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:20.799508  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:20.799581  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:21.299202  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.299281  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.299656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:21.799334  118459 type.go:168] "Request Body" body=""
	I1008 14:48:21.799412  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:21.799779  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.299478  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.299564  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.299936  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:22.798566  118459 type.go:168] "Request Body" body=""
	I1008 14:48:22.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:22.798990  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:23.298580  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.298653  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:23.299069  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:23.798927  118459 type.go:168] "Request Body" body=""
	I1008 14:48:23.799024  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:23.799392  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.298958  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.299037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.299387  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:24.798959  118459 type.go:168] "Request Body" body=""
	I1008 14:48:24.799037  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:24.799400  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:25.299272  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.299346  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.299722  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:25.299785  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:25.799564  118459 type.go:168] "Request Body" body=""
	I1008 14:48:25.799644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:25.800010  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.298751  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.298851  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.299197  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:26.798945  118459 type.go:168] "Request Body" body=""
	I1008 14:48:26.799020  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:26.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:27.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.299365  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.299762  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:27.299828  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:27.799408  118459 type.go:168] "Request Body" body=""
	I1008 14:48:27.799498  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:27.799868  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.299505  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.299589  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.299938  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:28.798630  118459 type.go:168] "Request Body" body=""
	I1008 14:48:28.798710  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:28.799066  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.298603  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.298738  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.299072  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:29.798631  118459 type.go:168] "Request Body" body=""
	I1008 14:48:29.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:29.799067  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:29.799143  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:30.298652  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.298723  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.299058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:30.798639  118459 type.go:168] "Request Body" body=""
	I1008 14:48:30.798719  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:30.799061  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.298623  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.298696  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.299054  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:31.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:48:31.798739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:31.799087  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:32.298655  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.298728  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.299071  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:32.299152  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:32.798666  118459 type.go:168] "Request Body" body=""
	I1008 14:48:32.798747  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:32.799135  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.298695  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.298774  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.299116  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:33.798993  118459 type.go:168] "Request Body" body=""
	I1008 14:48:33.799069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:33.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:34.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.299476  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.299807  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:34.299873  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:34.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:34.798675  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:34.799083  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.298918  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.299259  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:35.799014  118459 type.go:168] "Request Body" body=""
	I1008 14:48:35.799088  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:35.799478  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.299309  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.299386  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.299754  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:36.798548  118459 type.go:168] "Request Body" body=""
	I1008 14:48:36.798627  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:36.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:36.799056  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:37.298853  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.298929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.299261  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:37.798581  118459 type.go:168] "Request Body" body=""
	I1008 14:48:37.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:37.799023  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.298605  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.298681  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.299063  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:38.799034  118459 type.go:168] "Request Body" body=""
	I1008 14:48:38.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:38.799534  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:38.799603  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:39.299424  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.299514  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.299862  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:39.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:48:39.798705  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:39.799092  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.298907  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.298997  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.299335  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:40.799204  118459 type.go:168] "Request Body" body=""
	I1008 14:48:40.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:40.799649  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:40.799728  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:41.299541  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.299632  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.299970  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:41.798741  118459 type.go:168] "Request Body" body=""
	I1008 14:48:41.798831  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:41.799187  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.298986  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.299069  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.299473  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:42.799301  118459 type.go:168] "Request Body" body=""
	I1008 14:48:42.799376  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:42.799728  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:42.799794  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:43.298557  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.298631  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.299030  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:43.798919  118459 type.go:168] "Request Body" body=""
	I1008 14:48:43.799001  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:43.799377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.299220  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.299306  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.299666  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:44.799308  118459 type.go:168] "Request Body" body=""
	I1008 14:48:44.799379  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:44.799750  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:45.299391  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.299504  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.299837  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:45.299906  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:45.799476  118459 type.go:168] "Request Body" body=""
	I1008 14:48:45.799562  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:45.799953  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.298535  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.298610  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.298988  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:46.798597  118459 type.go:168] "Request Body" body=""
	I1008 14:48:46.798683  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:46.799058  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.298928  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.299009  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.299377  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:47.798939  118459 type.go:168] "Request Body" body=""
	I1008 14:48:47.799014  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:47.799409  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:47.799500  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:48.299000  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.299084  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.299436  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:48.799313  118459 type.go:168] "Request Body" body=""
	I1008 14:48:48.799397  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:48.799757  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.299469  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.299546  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.299912  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:49.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:48:49.798748  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:49.799121  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:50.298729  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.298811  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.299173  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:50.299238  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:50.798775  118459 type.go:168] "Request Body" body=""
	I1008 14:48:50.798856  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:50.799248  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.298812  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.298897  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.299264  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:51.798948  118459 type.go:168] "Request Body" body=""
	I1008 14:48:51.799023  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:51.799386  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:52.298978  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.299070  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.299481  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:52.299545  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:52.799050  118459 type.go:168] "Request Body" body=""
	I1008 14:48:52.799126  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:52.799504  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.299161  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.299264  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.299675  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:53.799435  118459 type.go:168] "Request Body" body=""
	I1008 14:48:53.799534  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:53.799875  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.298636  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.298718  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.299112  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:54.798847  118459 type.go:168] "Request Body" body=""
	I1008 14:48:54.798929  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:54.799294  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:54.799357  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:55.299157  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.299235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.299606  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:55.799386  118459 type.go:168] "Request Body" body=""
	I1008 14:48:55.799470  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:55.799852  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.298612  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.298687  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.299065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:56.798779  118459 type.go:168] "Request Body" body=""
	I1008 14:48:56.798868  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:56.799243  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:57.299138  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.299227  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.299600  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:57.299666  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:57.799470  118459 type.go:168] "Request Body" body=""
	I1008 14:48:57.799545  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:57.799918  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.298679  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.298761  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.299149  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:58.799015  118459 type.go:168] "Request Body" body=""
	I1008 14:48:58.799090  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:58.799430  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:48:59.299293  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.299392  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.299742  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:48:59.299808  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:48:59.798577  118459 type.go:168] "Request Body" body=""
	I1008 14:48:59.798654  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:48:59.799065  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.298879  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.298973  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.299326  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:00.799153  118459 type.go:168] "Request Body" body=""
	I1008 14:49:00.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:00.799577  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:01.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.299553  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.299898  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:01.299965  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:01.798701  118459 type.go:168] "Request Body" body=""
	I1008 14:49:01.798778  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:01.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.298874  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.299315  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:02.799145  118459 type.go:168] "Request Body" body=""
	I1008 14:49:02.799228  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:02.799568  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.299396  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.299513  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:03.798557  118459 type.go:168] "Request Body" body=""
	I1008 14:49:03.798644  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:03.799073  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:03.799140  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:04.298885  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.298976  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.299401  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:04.799261  118459 type.go:168] "Request Body" body=""
	I1008 14:49:04.799342  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:04.799710  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.299549  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.299642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.300048  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:05.798774  118459 type.go:168] "Request Body" body=""
	I1008 14:49:05.798849  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:05.799206  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:05.799268  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:06.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.299053  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.299418  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:06.799240  118459 type.go:168] "Request Body" body=""
	I1008 14:49:06.799328  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:06.799681  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.299414  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.299532  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.299883  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:07.798625  118459 type.go:168] "Request Body" body=""
	I1008 14:49:07.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:07.799044  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:08.298825  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.298920  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.299292  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:08.299350  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:08.799137  118459 type.go:168] "Request Body" body=""
	I1008 14:49:08.799221  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:08.799589  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.299435  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.299540  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.299921  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:09.798654  118459 type.go:168] "Request Body" body=""
	I1008 14:49:09.798730  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:09.799064  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:10.298828  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.298925  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.299313  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:10.299380  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:10.799149  118459 type.go:168] "Request Body" body=""
	I1008 14:49:10.799223  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:10.799572  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.299419  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.299531  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.299928  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:11.798698  118459 type.go:168] "Request Body" body=""
	I1008 14:49:11.798777  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:11.799140  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:12.298875  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.298967  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.299357  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:12.299428  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:12.799215  118459 type.go:168] "Request Body" body=""
	I1008 14:49:12.799288  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:12.799641  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.299434  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.299538  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.299901  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:13.798587  118459 type.go:168] "Request Body" body=""
	I1008 14:49:13.798658  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:13.798993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.298718  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.298806  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.299190  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:14.798984  118459 type.go:168] "Request Body" body=""
	I1008 14:49:14.799091  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:14.799423  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:14.799511  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:15.299254  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.299343  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.299706  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:15.798574  118459 type.go:168] "Request Body" body=""
	I1008 14:49:15.798655  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:15.798992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.298700  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.298800  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.299145  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:16.798890  118459 type.go:168] "Request Body" body=""
	I1008 14:49:16.798966  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:16.799300  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:17.299095  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.299193  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.299535  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:17.299597  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:17.799262  118459 type.go:168] "Request Body" body=""
	I1008 14:49:17.799337  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:17.799684  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.299284  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.299383  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.299759  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:18.799524  118459 type.go:168] "Request Body" body=""
	I1008 14:49:18.799598  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:18.799939  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:19.299552  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.299638  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.299992  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:19.300058  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:19.798578  118459 type.go:168] "Request Body" body=""
	I1008 14:49:19.798656  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:19.799018  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.298569  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.298665  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.299002  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:20.798710  118459 type.go:168] "Request Body" body=""
	I1008 14:49:20.798789  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:20.799127  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.298846  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.298952  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.299301  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:21.799159  118459 type.go:168] "Request Body" body=""
	I1008 14:49:21.799239  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:21.799630  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:21.799697  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:22.299522  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.299619  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.299991  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:22.798758  118459 type.go:168] "Request Body" body=""
	I1008 14:49:22.798834  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:22.799181  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.298962  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.299061  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.299437  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:23.799357  118459 type.go:168] "Request Body" body=""
	I1008 14:49:23.799433  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:23.799786  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:23.799850  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:24.298547  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.298642  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.298993  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:24.798746  118459 type.go:168] "Request Body" body=""
	I1008 14:49:24.798835  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:24.799161  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.298901  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.298996  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.299334  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:25.799154  118459 type.go:168] "Request Body" body=""
	I1008 14:49:25.799236  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:25.799604  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:26.299399  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.299521  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.299888  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:26.299960  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:26.798629  118459 type.go:168] "Request Body" body=""
	I1008 14:49:26.798708  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:26.799035  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.298805  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.298901  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.299256  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:27.798972  118459 type.go:168] "Request Body" body=""
	I1008 14:49:27.799047  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:27.799378  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.299186  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.299286  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.299665  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:28.799521  118459 type.go:168] "Request Body" body=""
	I1008 14:49:28.799616  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:28.800091  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:28.800170  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:29.298943  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.299021  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.299362  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:29.799176  118459 type.go:168] "Request Body" body=""
	I1008 14:49:29.799282  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:29.799656  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.299485  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.299566  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.299899  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:30.798586  118459 type.go:168] "Request Body" body=""
	I1008 14:49:30.798676  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:30.799028  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:31.298771  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.298842  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.299157  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:31.299210  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:31.798882  118459 type.go:168] "Request Body" body=""
	I1008 14:49:31.798989  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:31.799342  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.299195  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.299278  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.299631  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:32.799405  118459 type.go:168] "Request Body" body=""
	I1008 14:49:32.799515  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:32.799866  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.298635  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.298717  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.299051  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:33.798843  118459 type.go:168] "Request Body" body=""
	I1008 14:49:33.798922  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:33.799266  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:33.799342  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:34.299019  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.299102  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.299432  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:34.799270  118459 type.go:168] "Request Body" body=""
	I1008 14:49:34.799358  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:34.799712  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.299543  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.299621  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.299995  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:35.798712  118459 type.go:168] "Request Body" body=""
	I1008 14:49:35.798807  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:35.799171  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:36.298656  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.298739  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.299113  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:36.299199  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:36.798682  118459 type.go:168] "Request Body" body=""
	I1008 14:49:36.798766  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:36.799131  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.299039  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.299115  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.299475  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:37.799319  118459 type.go:168] "Request Body" body=""
	I1008 14:49:37.799403  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:37.799767  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.298555  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.298633  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.298999  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:38.798634  118459 type.go:168] "Request Body" body=""
	I1008 14:49:38.798716  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:38.799060  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:38.799123  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:39.298837  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.298919  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.299318  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:39.799162  118459 type.go:168] "Request Body" body=""
	I1008 14:49:39.799235  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:39.799585  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.299409  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.299508  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.299869  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:40.798649  118459 type.go:168] "Request Body" body=""
	I1008 14:49:40.798726  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:40.799084  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:40.799144  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:41.298831  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.298921  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:41.799026  118459 type.go:168] "Request Body" body=""
	I1008 14:49:41.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:41.799516  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.299377  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.299467  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.299819  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:42.798568  118459 type.go:168] "Request Body" body=""
	I1008 14:49:42.798643  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:42.798984  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:43.298738  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.298822  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.299257  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:43.299318  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:43.799035  118459 type.go:168] "Request Body" body=""
	I1008 14:49:43.799111  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:43.799483  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.299311  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.299382  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.299773  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:44.798575  118459 type.go:168] "Request Body" body=""
	I1008 14:49:44.798649  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:44.799012  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.298748  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.298824  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.299159  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:45.798886  118459 type.go:168] "Request Body" body=""
	I1008 14:49:45.798960  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:45.799321  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1008 14:49:45.799384  118459 node_ready.go:55] error getting node "functional-367186" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-367186": dial tcp 192.168.49.2:8441: connect: connection refused
	I1008 14:49:46.299022  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.299330  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.299733  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:46.798742  118459 type.go:168] "Request Body" body=""
	I1008 14:49:46.798830  118459 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-367186" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1008 14:49:46.799234  118459 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1008 14:49:47.299129  118459 type.go:168] "Request Body" body=""
	I1008 14:49:47.299208  118459 node_ready.go:38] duration metric: took 6m0.000826952s for node "functional-367186" to be "Ready" ...
	I1008 14:49:47.302039  118459 out.go:203] 
	W1008 14:49:47.303804  118459 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 14:49:47.303820  118459 out.go:285] * 
	W1008 14:49:47.305511  118459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 14:49:47.306606  118459 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.643106269Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=27a26596-df15-4422-b397-5213400c194d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:57 functional-367186 crio[2943]: time="2025-10-08T14:49:57.643144425Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=27a26596-df15-4422-b397-5213400c194d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:58 functional-367186 crio[2943]: time="2025-10-08T14:49:58.090211707Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=aedb1671-958e-490e-8b22-b06bf378bfd2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.436638443Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0780a6a7-9ff8-4768-b851-5587c9cf2d5c name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.437555815Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a0abb578-de69-41c4-9120-ccb53127c977 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.438478306Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.438705906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.441896231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.442278474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.456661806Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.458042397Z" level=info msg="createCtr: deleting container ID 4dfe11145432070a4d98cbdd3173dd026508077df814b74a085d9ca49be9c026 from idIndex" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.458096804Z" level=info msg="createCtr: removing container 4dfe11145432070a4d98cbdd3173dd026508077df814b74a085d9ca49be9c026" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.458150367Z" level=info msg="createCtr: deleting container 4dfe11145432070a4d98cbdd3173dd026508077df814b74a085d9ca49be9c026 from storage" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:49:59 functional-367186 crio[2943]: time="2025-10-08T14:49:59.460131362Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=2b906656-7767-4e3b-93e7-08a98750db2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.436295133Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=98eaf806-c917-440f-ae95-ecfa9c43c0d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.437219433Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f39b92dd-e8d2-4c53-8a7b-098f490aa676 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.438211219Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.438436291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.441894584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.442375803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.456427389Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.457944772Z" level=info msg="createCtr: deleting container ID a80526e61b3accabc2e929a1ec7755928b54374a6b1ff55b3537a3061b1c9f13 from idIndex" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.457990285Z" level=info msg="createCtr: removing container a80526e61b3accabc2e929a1ec7755928b54374a6b1ff55b3537a3061b1c9f13" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.458036787Z" level=info msg="createCtr: deleting container a80526e61b3accabc2e929a1ec7755928b54374a6b1ff55b3537a3061b1c9f13 from storage" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 14:50:00 functional-367186 crio[2943]: time="2025-10-08T14:50:00.461628532Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=54d0c365-f49e-4f85-b1eb-d86ad3d550fd name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:50:01.608420    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:50:01.609007    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:50:01.610640    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:50:01.611153    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:50:01.612225    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 14:50:01 up  2:32,  0 user,  load average: 0.39, 0.12, 0.46
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 14:49:53 functional-367186 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:53 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:53 functional-367186 kubelet[1801]: E1008 14:49:53.462337    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: E1008 14:49:56.115790    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: I1008 14:49:56.330389    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 14:49:56 functional-367186 kubelet[1801]: E1008 14:49:56.330779    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 14:49:57 functional-367186 kubelet[1801]: E1008 14:49:57.460164    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.436100    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460419    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:49:59 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:59 functional-367186 kubelet[1801]:  > podSandboxID="4f5c4547ba25f8047b1a01ec096a800bad6487d4d0d0fe8fd4a152424b0efbf9"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460550    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:49:59 functional-367186 kubelet[1801]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:49:59 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:49:59 functional-367186 kubelet[1801]: E1008 14:49:59.460587    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 14:50:00 functional-367186 kubelet[1801]: E1008 14:50:00.435866    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 14:50:00 functional-367186 kubelet[1801]: E1008 14:50:00.461971    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 14:50:00 functional-367186 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:50:00 functional-367186 kubelet[1801]:  > podSandboxID="4a13bc9351a22b93554dcee46226666905c4e1638ab46a476341d1435096d9d8"
	Oct 08 14:50:00 functional-367186 kubelet[1801]: E1008 14:50:00.462083    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 14:50:00 functional-367186 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 14:50:00 functional-367186 kubelet[1801]:  > logger="UnhandledError"
	Oct 08 14:50:00 functional-367186 kubelet[1801]: E1008 14:50:00.462117    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 14:50:01 functional-367186 kubelet[1801]: E1008 14:50:01.069489    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-367186.186c8afed11699ef\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8afed11699ef  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:39:41.429266927 +0000 UTC m=+0.550355432,LastTimestamp:2025-10-08 14:39:41.43072231 +0000 UTC m=+0.551810801,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 14:50:01 functional-367186 kubelet[1801]: E1008 14:50:01.484758    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (294.188981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.11s)
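
For reference, a minimal shell sketch of how the diagnostics suggested in the output above could be gathered for this run; it reuses the `minikube logs --file=logs.txt` advice and the apiserver status check from helpers_test.go, and assumes the `functional-367186` profile and the `out/minikube-linux-amd64` binary path shown in this report.

	# Hedged sketch: collect status and logs for the failing profile
	# (profile name and binary path are assumptions taken from this report).
	PROFILE=functional-367186
	out/minikube-linux-amd64 status -p "$PROFILE" --format='{{.APIServer}}'
	out/minikube-linux-amd64 logs -p "$PROFILE" --file=logs.txt
	# logs.txt can then be attached to a GitHub issue, as the advice box above suggests.
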

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (734.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m12.472403376s)

                                                
                                                
-- stdout --
	* [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
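The kubeadm output captured above points at the same follow-up each time it is printed: none of the three control-plane components ever answered its health endpoint, so the next step is to look at the control-plane containers themselves. A minimal diagnostic sketch, assuming the crio socket path shown in the kubeadm hint and the profile name used by this test (run the crictl commands inside the node, e.g. via 'minikube ssh'):

    # enter the node that failed to come up
    minikube ssh -p functional-367186

    # list the kube-* containers crio knows about, including exited ones (command taken from the kubeadm hint above)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # inspect the logs of whichever container ID from that listing is failing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

    # back on the host, collect the full log bundle the error box asks for
    minikube logs -p functional-367186 --file=logs.txt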
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m12.474741937s for "functional-367186" cluster.
I1008 15:02:14.918898   98900 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
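For reference, the restart that failed here exercised minikube's --extra-config flag, which forwards a component.key=value pair to the named Kubernetes component; the same pair reappears later in this log as ExtraOptions on the cluster config. The invocation, as reported in the failure message above:

    # pass an extra apiserver flag through minikube; syntax is --extra-config=<component>.<key>=<value>
    out/minikube-linux-amd64 start -p functional-367186 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all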
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
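The docker inspect dump above is the raw JSON; later in this same log minikube extracts individual fields from it with --format Go templates. A short sketch of pulling out the two values the post-mortem relies on, using the same templates that appear in the "Last Start" log below:

    # published host port for the node's SSH endpoint (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-367186

    # container IP on the profile's network (192.168.49.2 in the dump above)
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-367186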
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (303.227665ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
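The "may be ok" note reflects how minikube status reports state: the command exits non-zero when the cluster is not fully healthy, so a Running host can coexist with exit status 2 while the apiserver is down, as it is here. A quick way to see the full breakdown for this profile (a sketch; the harness above only queried the Host field):

    # host state only, as queried above
    out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186

    # full per-component status for the same profile
    out/minikube-linux-amd64 status -p functional-367186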
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.1                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.3                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:latest                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add minikube-local-cache-test:functional-367186                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache delete minikube-local-cache-test:functional-367186                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl images                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ cache   │ functional-367186 cache reload                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ kubectl │ functional-367186 kubectl -- --context functional-367186 get pods                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ start   │ -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:50:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:50:02.487614  124886 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:02.487885  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.487890  124886 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:02.487894  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.488148  124886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:50:02.488703  124886 out.go:368] Setting JSON to false
	I1008 14:50:02.489732  124886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9153,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:50:02.489824  124886 start.go:141] virtualization: kvm guest
	I1008 14:50:02.491855  124886 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:50:02.493271  124886 notify.go:220] Checking for updates...
	I1008 14:50:02.493279  124886 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:50:02.494598  124886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:50:02.495836  124886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:50:02.497242  124886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:50:02.498624  124886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:50:02.499973  124886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:50:02.501897  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:02.502018  124886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:50:02.525193  124886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:50:02.525315  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.584022  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.573926988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.584110  124886 docker.go:318] overlay module found
	I1008 14:50:02.585968  124886 out.go:179] * Using the docker driver based on existing profile
	I1008 14:50:02.587279  124886 start.go:305] selected driver: docker
	I1008 14:50:02.587288  124886 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.587409  124886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:50:02.587529  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.641632  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.631975419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.642294  124886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:50:02.642317  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:02.642374  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:02.642409  124886 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.644427  124886 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:50:02.645877  124886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:50:02.647092  124886 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:50:02.648224  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:02.648254  124886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:50:02.648262  124886 cache.go:58] Caching tarball of preloaded images
	I1008 14:50:02.648344  124886 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:50:02.648340  124886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:50:02.648350  124886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:50:02.648438  124886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:50:02.667989  124886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:50:02.668000  124886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:50:02.668014  124886 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:50:02.668041  124886 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:50:02.668096  124886 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "functional-367186"
	I1008 14:50:02.668109  124886 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:50:02.668113  124886 fix.go:54] fixHost starting: 
	I1008 14:50:02.668337  124886 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:50:02.684543  124886 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:50:02.684562  124886 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:50:02.686414  124886 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:50:02.686441  124886 machine.go:93] provisionDockerMachine start ...
	I1008 14:50:02.686552  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.704251  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.704482  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.704488  124886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:50:02.850612  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:02.850631  124886 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:50:02.850683  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.868208  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.868417  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.868424  124886 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:50:03.024186  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:03.024255  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.041071  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.041277  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.041288  124886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:50:03.186253  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:50:03.186270  124886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:50:03.186287  124886 ubuntu.go:190] setting up certificates
	I1008 14:50:03.186296  124886 provision.go:84] configureAuth start
	I1008 14:50:03.186366  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:03.203498  124886 provision.go:143] copyHostCerts
	I1008 14:50:03.203554  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:50:03.203567  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:50:03.203633  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:50:03.203728  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:50:03.203738  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:50:03.203764  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:50:03.203811  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:50:03.203815  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:50:03.203835  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:50:03.203891  124886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:50:03.342698  124886 provision.go:177] copyRemoteCerts
	I1008 14:50:03.342747  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:50:03.342789  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.359931  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.462754  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:50:03.480100  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:50:03.497218  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:50:03.514367  124886 provision.go:87] duration metric: took 328.059175ms to configureAuth
	I1008 14:50:03.514387  124886 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:50:03.514597  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:03.514714  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.531920  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.532136  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.532149  124886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:50:03.804333  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:50:03.804348  124886 machine.go:96] duration metric: took 1.117888769s to provisionDockerMachine
	I1008 14:50:03.804358  124886 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:50:03.804366  124886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:50:03.804425  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:50:03.804490  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.822222  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.925021  124886 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:50:03.928570  124886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:50:03.928586  124886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:50:03.928595  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:50:03.928648  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:50:03.928714  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:50:03.928776  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:50:03.928851  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:50:03.936383  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:03.953682  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:50:03.970665  124886 start.go:296] duration metric: took 166.291312ms for postStartSetup
	I1008 14:50:03.970729  124886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:03.970760  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.987625  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.086669  124886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:50:04.091298  124886 fix.go:56] duration metric: took 1.423178254s for fixHost
	I1008 14:50:04.091311  124886 start.go:83] releasing machines lock for "functional-367186", held for 1.423209484s
	I1008 14:50:04.091360  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:04.107787  124886 ssh_runner.go:195] Run: cat /version.json
	I1008 14:50:04.107823  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.107871  124886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:50:04.107944  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.125505  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.126027  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.277012  124886 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:04.283607  124886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:50:04.317281  124886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:50:04.322127  124886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:50:04.322186  124886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:50:04.329933  124886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:50:04.329948  124886 start.go:495] detecting cgroup driver to use...
	I1008 14:50:04.329985  124886 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:50:04.330037  124886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:50:04.344088  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:50:04.355897  124886 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:50:04.355934  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:50:04.370666  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:50:04.383061  124886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:50:04.469185  124886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:50:04.555865  124886 docker.go:234] disabling docker service ...
	I1008 14:50:04.555933  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:50:04.571649  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:50:04.585004  124886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:50:04.673830  124886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:50:04.762936  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:50:04.775689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:50:04.790127  124886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:50:04.790172  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.799414  124886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:50:04.799484  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.808366  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.816703  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.825175  124886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:50:04.833160  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.842121  124886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.850355  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.859028  124886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:50:04.866049  124886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:50:04.873109  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:04.955543  124886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:50:05.069798  124886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:50:05.069856  124886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:50:05.074109  124886 start.go:563] Will wait 60s for crictl version
	I1008 14:50:05.074171  124886 ssh_runner.go:195] Run: which crictl
	I1008 14:50:05.077741  124886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:50:05.103519  124886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:50:05.103581  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.131061  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.160549  124886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:50:05.161770  124886 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:50:05.178428  124886 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:50:05.184282  124886 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 14:50:05.185372  124886 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:50:05.185532  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:05.185581  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.219145  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.219157  124886 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:50:05.219203  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.244747  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.244760  124886 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:50:05.244766  124886 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:50:05.244868  124886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:50:05.244932  124886 ssh_runner.go:195] Run: crio config
	I1008 14:50:05.290552  124886 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 14:50:05.290627  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:05.290634  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:05.290643  124886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:50:05.290661  124886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:50:05.290774  124886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:50:05.290829  124886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:50:05.299112  124886 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:50:05.299181  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:50:05.307519  124886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:50:05.319796  124886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:50:05.331988  124886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1008 14:50:05.344225  124886 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:50:05.347910  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:05.434760  124886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:50:05.447481  124886 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:50:05.447496  124886 certs.go:195] generating shared ca certs ...
	I1008 14:50:05.447517  124886 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:50:05.447665  124886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:50:05.447699  124886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:50:05.447705  124886 certs.go:257] generating profile certs ...
	I1008 14:50:05.447783  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:50:05.447822  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:50:05.447852  124886 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:50:05.447956  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:50:05.447979  124886 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:50:05.447984  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:50:05.448004  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:50:05.448022  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:50:05.448039  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:50:05.448072  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:05.448723  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:50:05.466280  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:50:05.482753  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:50:05.499451  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:50:05.516010  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:50:05.532903  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:50:05.549460  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:50:05.566552  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:50:05.584248  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:50:05.601250  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:50:05.618600  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:50:05.636280  124886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:50:05.648959  124886 ssh_runner.go:195] Run: openssl version
	I1008 14:50:05.655372  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:50:05.664552  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668508  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668554  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.702319  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:50:05.710597  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:50:05.719238  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722899  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722944  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.756814  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:50:05.765232  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:50:05.773915  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777582  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777627  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.811974  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:50:05.820369  124886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:50:05.824309  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:50:05.858210  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:50:05.892122  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:50:05.926997  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:50:05.961508  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:50:05.996031  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:50:06.030615  124886 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:06.030703  124886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:50:06.030782  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.059591  124886 cri.go:89] found id: ""
	I1008 14:50:06.059641  124886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:50:06.068127  124886 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:50:06.068151  124886 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:50:06.068205  124886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:50:06.076226  124886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.076725  124886 kubeconfig.go:125] found "functional-367186" server: "https://192.168.49.2:8441"
	I1008 14:50:06.077896  124886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:50:06.086029  124886 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 14:35:34.873718023 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 14:50:05.341579042 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1008 14:50:06.086044  124886 kubeadm.go:1160] stopping kube-system containers ...
	I1008 14:50:06.086056  124886 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 14:50:06.086094  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.113178  124886 cri.go:89] found id: ""
	I1008 14:50:06.113245  124886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 14:50:06.155234  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:50:06.163592  124886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  8 14:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  8 14:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Oct  8 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  8 14:39 /etc/kubernetes/scheduler.conf
	
	I1008 14:50:06.163642  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:50:06.171483  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:50:06.179293  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.179397  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:50:06.186779  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.194154  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.194203  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.201651  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:50:06.209487  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.209530  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:50:06.217108  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:50:06.224828  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:06.265674  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.277477  124886 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011762147s)
	I1008 14:50:07.277533  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.443820  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.494457  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.547380  124886 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:50:07.547460  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.047610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.547636  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.047603  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.548254  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.047862  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.548513  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.048225  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.548074  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.048566  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.548179  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.047805  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.548258  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.048373  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.047544  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.548496  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.048492  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.548115  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.548277  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.047671  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.048049  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.547809  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.047855  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.547915  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.048015  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.547746  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.048353  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.548289  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.048071  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.547643  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.047912  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.548519  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.047801  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.547748  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.048322  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.548153  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.047657  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.547721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.047652  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.047871  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.548380  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.047959  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.548581  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.047957  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.547650  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.048117  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.547561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.048296  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.547881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.047870  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.548272  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.548487  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.047562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.547999  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.048398  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.547939  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.048434  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.547918  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.048433  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.548054  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.048329  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.548100  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.047697  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.548386  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.047561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.548546  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.048286  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.547793  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.048077  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.547717  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.048220  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.548251  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.047634  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.548172  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.048591  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.548428  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.048515  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.547901  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.048572  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.548237  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.047859  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.548570  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.047742  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.548274  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.047802  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.548510  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.047998  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.547560  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.047723  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.547955  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.048562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.547549  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.047984  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.547945  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.048426  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.547582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.048058  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.548196  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.048582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.548046  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.047563  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.047699  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.547610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.048374  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.548211  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.048533  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.548306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:07.548386  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:07.574942  124886 cri.go:89] found id: ""
	I1008 14:51:07.574974  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.574982  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:07.574988  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:07.575052  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:07.600942  124886 cri.go:89] found id: ""
	I1008 14:51:07.600957  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.600964  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:07.600968  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:07.601020  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:07.627307  124886 cri.go:89] found id: ""
	I1008 14:51:07.627324  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.627331  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:07.627336  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:07.627388  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:07.653908  124886 cri.go:89] found id: ""
	I1008 14:51:07.653925  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.653933  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:07.653938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:07.653988  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:07.681787  124886 cri.go:89] found id: ""
	I1008 14:51:07.681806  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.681814  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:07.681818  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:07.681881  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:07.707870  124886 cri.go:89] found id: ""
	I1008 14:51:07.707886  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.707892  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:07.707898  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:07.707955  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:07.734640  124886 cri.go:89] found id: ""
	I1008 14:51:07.734655  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.734662  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:07.734673  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:07.734682  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:07.804699  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:07.804721  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:07.819273  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:07.819290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:07.875686  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:07.875696  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:07.875709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:07.940091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:07.940122  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:10.470645  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:10.481694  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:10.481739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:10.506817  124886 cri.go:89] found id: ""
	I1008 14:51:10.506832  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.506839  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:10.506843  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:10.506898  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:10.531484  124886 cri.go:89] found id: ""
	I1008 14:51:10.531499  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.531506  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:10.531511  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:10.531558  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:10.557249  124886 cri.go:89] found id: ""
	I1008 14:51:10.557268  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.557277  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:10.557282  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:10.557333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:10.582779  124886 cri.go:89] found id: ""
	I1008 14:51:10.582797  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.582833  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:10.582838  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:10.582908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:10.608584  124886 cri.go:89] found id: ""
	I1008 14:51:10.608599  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.608606  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:10.608610  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:10.608653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:10.634540  124886 cri.go:89] found id: ""
	I1008 14:51:10.634557  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.634567  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:10.634573  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:10.634635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:10.659510  124886 cri.go:89] found id: ""
	I1008 14:51:10.659526  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.659532  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:10.659541  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:10.659552  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:10.727322  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:10.727344  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:10.741862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:10.741882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:10.798339  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:10.798350  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:10.798362  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:10.862340  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:10.862363  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.392975  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:13.404098  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:13.404165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:13.430215  124886 cri.go:89] found id: ""
	I1008 14:51:13.430231  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.430237  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:13.430242  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:13.430283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:13.455821  124886 cri.go:89] found id: ""
	I1008 14:51:13.455837  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.455844  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:13.455853  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:13.455903  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:13.482279  124886 cri.go:89] found id: ""
	I1008 14:51:13.482296  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.482316  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:13.482321  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:13.482366  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:13.508868  124886 cri.go:89] found id: ""
	I1008 14:51:13.508883  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.508893  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:13.508900  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:13.508957  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:13.534938  124886 cri.go:89] found id: ""
	I1008 14:51:13.534954  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.534960  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:13.534964  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:13.535012  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:13.562594  124886 cri.go:89] found id: ""
	I1008 14:51:13.562611  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.562620  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:13.562626  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:13.562683  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:13.588476  124886 cri.go:89] found id: ""
	I1008 14:51:13.588493  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.588505  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:13.588513  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:13.588522  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.617969  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:13.617996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:13.687989  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:13.688010  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:13.702556  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:13.702577  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:13.758238  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:13.758274  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:13.758288  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.324420  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:16.335355  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:16.335413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:16.361211  124886 cri.go:89] found id: ""
	I1008 14:51:16.361227  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.361233  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:16.361238  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:16.361283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:16.388154  124886 cri.go:89] found id: ""
	I1008 14:51:16.388170  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.388176  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:16.388180  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:16.388234  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:16.414515  124886 cri.go:89] found id: ""
	I1008 14:51:16.414532  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.414539  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:16.414545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:16.414606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:16.441112  124886 cri.go:89] found id: ""
	I1008 14:51:16.441130  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.441137  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:16.441143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:16.441196  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:16.467403  124886 cri.go:89] found id: ""
	I1008 14:51:16.467423  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.467434  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:16.467439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:16.467515  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:16.493912  124886 cri.go:89] found id: ""
	I1008 14:51:16.493994  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.494017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:16.494025  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:16.494086  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:16.520736  124886 cri.go:89] found id: ""
	I1008 14:51:16.520754  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.520761  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:16.520770  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:16.520784  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:16.578205  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:16.578222  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:16.578237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.641639  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:16.641661  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:16.671073  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:16.671090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:16.740879  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:16.740901  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.256721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:19.267621  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:19.267671  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:19.293587  124886 cri.go:89] found id: ""
	I1008 14:51:19.293605  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.293611  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:19.293616  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:19.293661  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:19.318866  124886 cri.go:89] found id: ""
	I1008 14:51:19.318886  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.318898  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:19.318905  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:19.318973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:19.344646  124886 cri.go:89] found id: ""
	I1008 14:51:19.344660  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.344668  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:19.344673  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:19.344730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:19.370979  124886 cri.go:89] found id: ""
	I1008 14:51:19.370994  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.371001  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:19.371006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:19.371049  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:19.398115  124886 cri.go:89] found id: ""
	I1008 14:51:19.398134  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.398144  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:19.398149  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:19.398205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:19.425579  124886 cri.go:89] found id: ""
	I1008 14:51:19.425594  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.425602  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:19.425606  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:19.425664  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:19.451179  124886 cri.go:89] found id: ""
	I1008 14:51:19.451194  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.451201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:19.451209  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:19.451219  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:19.515409  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:19.515430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.530193  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:19.530208  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:19.587513  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:19.587527  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:19.587538  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:19.650244  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:19.650266  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:22.181221  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:22.192437  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:22.192530  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:22.218691  124886 cri.go:89] found id: ""
	I1008 14:51:22.218709  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.218717  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:22.218722  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:22.218784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:22.245011  124886 cri.go:89] found id: ""
	I1008 14:51:22.245028  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.245035  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:22.245040  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:22.245087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:22.271669  124886 cri.go:89] found id: ""
	I1008 14:51:22.271698  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.271706  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:22.271710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:22.271775  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:22.298500  124886 cri.go:89] found id: ""
	I1008 14:51:22.298520  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.298529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:22.298537  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:22.298598  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:22.324858  124886 cri.go:89] found id: ""
	I1008 14:51:22.324873  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.324879  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:22.324883  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:22.324930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:22.351540  124886 cri.go:89] found id: ""
	I1008 14:51:22.351556  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.351563  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:22.351568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:22.351613  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:22.377421  124886 cri.go:89] found id: ""
	I1008 14:51:22.377458  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.377470  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:22.377482  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:22.377497  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:22.450410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:22.450465  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:22.465230  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:22.465257  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:22.521387  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:22.521398  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:22.521409  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:22.586462  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:22.586490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.117667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:25.129264  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:25.129309  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:25.155977  124886 cri.go:89] found id: ""
	I1008 14:51:25.155998  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.156007  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:25.156016  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:25.156090  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:25.183268  124886 cri.go:89] found id: ""
	I1008 14:51:25.183288  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.183297  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:25.183302  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:25.183355  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:25.209728  124886 cri.go:89] found id: ""
	I1008 14:51:25.209745  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.209752  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:25.209763  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:25.209807  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:25.236946  124886 cri.go:89] found id: ""
	I1008 14:51:25.236961  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.236968  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:25.236974  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:25.237017  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:25.263116  124886 cri.go:89] found id: ""
	I1008 14:51:25.263132  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.263138  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:25.263143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:25.263189  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:25.288378  124886 cri.go:89] found id: ""
	I1008 14:51:25.288395  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.288401  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:25.288406  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:25.288460  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:25.315195  124886 cri.go:89] found id: ""
	I1008 14:51:25.315210  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.315217  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:25.315225  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:25.315237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:25.371376  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:25.371387  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:25.371396  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:25.435272  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:25.435294  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.465980  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:25.465996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:25.535450  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:25.535477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.050276  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:28.061620  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:28.061668  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:28.088245  124886 cri.go:89] found id: ""
	I1008 14:51:28.088265  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.088274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:28.088278  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:28.088326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:28.113839  124886 cri.go:89] found id: ""
	I1008 14:51:28.113859  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.113870  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:28.113876  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:28.113940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:28.141395  124886 cri.go:89] found id: ""
	I1008 14:51:28.141414  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.141423  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:28.141429  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:28.141503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:28.168333  124886 cri.go:89] found id: ""
	I1008 14:51:28.168348  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.168354  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:28.168360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:28.168413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:28.192847  124886 cri.go:89] found id: ""
	I1008 14:51:28.192864  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.192870  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:28.192876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:28.192936  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:28.218780  124886 cri.go:89] found id: ""
	I1008 14:51:28.218795  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.218801  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:28.218806  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:28.218875  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:28.244592  124886 cri.go:89] found id: ""
	I1008 14:51:28.244612  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.244622  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:28.244631  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:28.244643  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:28.315714  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:28.315736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.329938  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:28.329954  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:28.387618  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:28.387629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:28.387641  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:28.453202  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:28.453224  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:30.984664  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:30.995891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:30.995939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:31.022304  124886 cri.go:89] found id: ""
	I1008 14:51:31.022328  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.022338  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:31.022344  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:31.022401  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:31.049041  124886 cri.go:89] found id: ""
	I1008 14:51:31.049060  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.049069  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:31.049075  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:31.049123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:31.076924  124886 cri.go:89] found id: ""
	I1008 14:51:31.076940  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.076949  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:31.076953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:31.077003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:31.102922  124886 cri.go:89] found id: ""
	I1008 14:51:31.102942  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.102950  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:31.102955  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:31.103003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:31.131223  124886 cri.go:89] found id: ""
	I1008 14:51:31.131237  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.131244  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:31.131248  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:31.131294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:31.157335  124886 cri.go:89] found id: ""
	I1008 14:51:31.157350  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.157356  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:31.157361  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:31.157403  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:31.183539  124886 cri.go:89] found id: ""
	I1008 14:51:31.183556  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.183563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:31.183571  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:31.183582  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:31.254970  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:31.254991  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:31.269535  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:31.269556  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:31.325660  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:31.325690  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:31.325702  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:31.390180  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:31.390201  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:33.920121  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:33.931525  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:33.931580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:33.956578  124886 cri.go:89] found id: ""
	I1008 14:51:33.956594  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.956601  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:33.956606  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:33.956652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:33.983065  124886 cri.go:89] found id: ""
	I1008 14:51:33.983083  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.983094  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:33.983100  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:33.983176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:34.009180  124886 cri.go:89] found id: ""
	I1008 14:51:34.009198  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.009206  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:34.009211  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:34.009266  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:34.035120  124886 cri.go:89] found id: ""
	I1008 14:51:34.035138  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.035145  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:34.035151  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:34.035207  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:34.060490  124886 cri.go:89] found id: ""
	I1008 14:51:34.060506  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.060512  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:34.060517  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:34.060565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:34.086320  124886 cri.go:89] found id: ""
	I1008 14:51:34.086338  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.086346  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:34.086351  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:34.086394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:34.111862  124886 cri.go:89] found id: ""
	I1008 14:51:34.111883  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.111893  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:34.111902  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:34.111921  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:34.181743  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:34.181765  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:34.196152  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:34.196171  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:34.252034  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:34.252045  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:34.252056  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:34.316760  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:34.316781  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:36.845595  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:36.856603  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:36.856648  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:36.883175  124886 cri.go:89] found id: ""
	I1008 14:51:36.883194  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.883202  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:36.883209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:36.883267  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:36.910081  124886 cri.go:89] found id: ""
	I1008 14:51:36.910096  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.910103  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:36.910107  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:36.910157  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:36.935036  124886 cri.go:89] found id: ""
	I1008 14:51:36.935051  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.935062  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:36.935068  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:36.935122  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:36.961981  124886 cri.go:89] found id: ""
	I1008 14:51:36.961998  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.962009  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:36.962016  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:36.962126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:36.989270  124886 cri.go:89] found id: ""
	I1008 14:51:36.989290  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.989299  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:36.989306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:36.989363  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:37.016135  124886 cri.go:89] found id: ""
	I1008 14:51:37.016153  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.016161  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:37.016165  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:37.016215  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:37.043172  124886 cri.go:89] found id: ""
	I1008 14:51:37.043191  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.043201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:37.043211  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:37.043227  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:37.100326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:37.100338  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:37.100351  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:37.163756  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:37.163777  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:37.193435  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:37.193471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:37.260908  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:37.260933  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:39.777967  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:39.789007  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:39.789059  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:39.815862  124886 cri.go:89] found id: ""
	I1008 14:51:39.815879  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.815886  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:39.815890  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:39.815942  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:39.841950  124886 cri.go:89] found id: ""
	I1008 14:51:39.841966  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.841973  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:39.841979  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:39.842039  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:39.868668  124886 cri.go:89] found id: ""
	I1008 14:51:39.868686  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.868696  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:39.868702  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:39.868755  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:39.895534  124886 cri.go:89] found id: ""
	I1008 14:51:39.895554  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.895564  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:39.895571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:39.895622  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:39.922579  124886 cri.go:89] found id: ""
	I1008 14:51:39.922598  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.922608  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:39.922614  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:39.922660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:39.948340  124886 cri.go:89] found id: ""
	I1008 14:51:39.948356  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.948363  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:39.948367  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:39.948410  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:39.975730  124886 cri.go:89] found id: ""
	I1008 14:51:39.975746  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.975752  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:39.975761  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:39.975771  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:40.004995  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:40.005014  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:40.075523  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:40.075546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:40.090104  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:40.090120  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:40.147226  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:40.147238  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:40.147253  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:42.711983  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:42.723356  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:42.723413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:42.749822  124886 cri.go:89] found id: ""
	I1008 14:51:42.749838  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.749844  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:42.749849  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:42.749917  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:42.776397  124886 cri.go:89] found id: ""
	I1008 14:51:42.776414  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.776421  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:42.776425  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:42.776493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:42.802489  124886 cri.go:89] found id: ""
	I1008 14:51:42.802508  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.802518  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:42.802524  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:42.802572  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:42.829172  124886 cri.go:89] found id: ""
	I1008 14:51:42.829187  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.829193  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:42.829198  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:42.829251  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:42.853534  124886 cri.go:89] found id: ""
	I1008 14:51:42.853552  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.853561  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:42.853568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:42.853635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:42.879567  124886 cri.go:89] found id: ""
	I1008 14:51:42.879583  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.879595  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:42.879601  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:42.879652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:42.904961  124886 cri.go:89] found id: ""
	I1008 14:51:42.904979  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.904986  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:42.904993  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:42.905009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:42.974363  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:42.974384  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:42.989172  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:42.989192  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:43.045247  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:43.045260  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:43.045275  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:43.106406  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:43.106429  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:45.637311  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:45.648040  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:45.648095  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:45.673462  124886 cri.go:89] found id: ""
	I1008 14:51:45.673481  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.673491  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:45.673497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:45.673550  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:45.698163  124886 cri.go:89] found id: ""
	I1008 14:51:45.698181  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.698188  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:45.698193  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:45.698246  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:45.723467  124886 cri.go:89] found id: ""
	I1008 14:51:45.723561  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.723573  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:45.723581  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:45.723641  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:45.748702  124886 cri.go:89] found id: ""
	I1008 14:51:45.748717  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.748726  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:45.748732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:45.748796  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:45.775585  124886 cri.go:89] found id: ""
	I1008 14:51:45.775604  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.775612  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:45.775617  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:45.775670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:45.801010  124886 cri.go:89] found id: ""
	I1008 14:51:45.801025  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.801031  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:45.801036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:45.801084  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:45.827042  124886 cri.go:89] found id: ""
	I1008 14:51:45.827059  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.827067  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:45.827075  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:45.827086  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:45.895458  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:45.895480  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:45.910085  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:45.910109  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:45.966571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:45.966593  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:45.966605  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:46.027581  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:46.027606  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:48.557168  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:48.568079  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:48.568130  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:48.594574  124886 cri.go:89] found id: ""
	I1008 14:51:48.594594  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.594603  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:48.594609  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:48.594653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:48.621962  124886 cri.go:89] found id: ""
	I1008 14:51:48.621977  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.621984  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:48.621989  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:48.622035  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:48.648065  124886 cri.go:89] found id: ""
	I1008 14:51:48.648080  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.648087  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:48.648091  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:48.648146  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:48.675285  124886 cri.go:89] found id: ""
	I1008 14:51:48.675300  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.675307  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:48.675311  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:48.675356  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:48.701191  124886 cri.go:89] found id: ""
	I1008 14:51:48.701210  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.701218  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:48.701225  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:48.701271  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:48.729042  124886 cri.go:89] found id: ""
	I1008 14:51:48.729069  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.729079  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:48.729086  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:48.729136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:48.754548  124886 cri.go:89] found id: ""
	I1008 14:51:48.754564  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.754572  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:48.754580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:48.754590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:48.822673  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:48.822705  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:48.836997  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:48.837017  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:48.894196  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:48.894212  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:48.894223  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:48.955101  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:48.955127  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.487365  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:51.498554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:51.498603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:51.525066  124886 cri.go:89] found id: ""
	I1008 14:51:51.525081  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.525088  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:51.525094  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:51.525147  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:51.550909  124886 cri.go:89] found id: ""
	I1008 14:51:51.550926  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.550933  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:51.550938  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:51.550989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:51.576844  124886 cri.go:89] found id: ""
	I1008 14:51:51.576860  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.576867  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:51.576871  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:51.576919  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:51.603876  124886 cri.go:89] found id: ""
	I1008 14:51:51.603894  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.603900  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:51.603907  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:51.603958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:51.630518  124886 cri.go:89] found id: ""
	I1008 14:51:51.630533  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.630540  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:51.630545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:51.630591  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:51.656592  124886 cri.go:89] found id: ""
	I1008 14:51:51.656625  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.656634  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:51.656641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:51.656686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:51.682732  124886 cri.go:89] found id: ""
	I1008 14:51:51.682750  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.682757  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:51.682766  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:51.682775  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:51.742589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:51.742612  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.771353  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:51.771369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:51.842948  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:51.842971  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:51.857862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:51.857882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:51.915551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.417267  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:54.428273  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:54.428333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:54.454016  124886 cri.go:89] found id: ""
	I1008 14:51:54.454030  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.454037  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:54.454042  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:54.454097  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:54.479088  124886 cri.go:89] found id: ""
	I1008 14:51:54.479104  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.479112  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:54.479117  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:54.479171  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:54.504383  124886 cri.go:89] found id: ""
	I1008 14:51:54.504401  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.504411  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:54.504418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:54.504481  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:54.530502  124886 cri.go:89] found id: ""
	I1008 14:51:54.530522  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.530529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:54.530534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:54.530578  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:54.556899  124886 cri.go:89] found id: ""
	I1008 14:51:54.556920  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.556929  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:54.556935  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:54.556983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:54.582860  124886 cri.go:89] found id: ""
	I1008 14:51:54.582878  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.582888  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:54.582895  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:54.582954  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:54.609653  124886 cri.go:89] found id: ""
	I1008 14:51:54.609670  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.609679  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:54.609689  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:54.609704  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:54.666095  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.666106  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:54.666116  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:54.725670  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:54.725693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:54.755377  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:54.755394  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:54.824839  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:54.824860  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.340378  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:57.351013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:57.351087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:57.377174  124886 cri.go:89] found id: ""
	I1008 14:51:57.377192  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.377201  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:57.377208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:57.377259  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:57.403239  124886 cri.go:89] found id: ""
	I1008 14:51:57.403254  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.403261  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:57.403271  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:57.403317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:57.429149  124886 cri.go:89] found id: ""
	I1008 14:51:57.429168  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.429179  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:57.429185  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:57.429244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:57.454095  124886 cri.go:89] found id: ""
	I1008 14:51:57.454114  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.454128  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:57.454133  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:57.454187  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:57.479640  124886 cri.go:89] found id: ""
	I1008 14:51:57.479658  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.479665  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:57.479670  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:57.479725  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:57.505776  124886 cri.go:89] found id: ""
	I1008 14:51:57.505795  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.505805  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:57.505811  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:57.505853  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:57.531837  124886 cri.go:89] found id: ""
	I1008 14:51:57.531852  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.531860  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:57.531867  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:57.531878  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:57.599522  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:57.599544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.614111  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:57.614132  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:57.671063  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:57.671074  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:57.671084  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:57.732027  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:57.732050  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:00.263338  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:00.274100  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:00.274167  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:00.299677  124886 cri.go:89] found id: ""
	I1008 14:52:00.299692  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.299698  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:00.299703  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:00.299744  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:00.325037  124886 cri.go:89] found id: ""
	I1008 14:52:00.325055  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.325065  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:00.325071  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:00.325128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:00.351372  124886 cri.go:89] found id: ""
	I1008 14:52:00.351388  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.351397  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:00.351402  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:00.351465  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:00.377746  124886 cri.go:89] found id: ""
	I1008 14:52:00.377761  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.377767  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:00.377772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:00.377838  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:00.403806  124886 cri.go:89] found id: ""
	I1008 14:52:00.403821  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.403827  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:00.403832  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:00.403888  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:00.431653  124886 cri.go:89] found id: ""
	I1008 14:52:00.431673  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.431682  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:00.431687  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:00.431732  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:00.458706  124886 cri.go:89] found id: ""
	I1008 14:52:00.458720  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.458727  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:00.458735  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:00.458744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:00.527333  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:00.527355  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:00.545238  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:00.545260  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:00.604166  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:00.604178  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:00.604190  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:00.667338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:00.667360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.196993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:03.207677  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:03.207730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:03.232932  124886 cri.go:89] found id: ""
	I1008 14:52:03.232952  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.232963  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:03.232969  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:03.233019  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:03.257910  124886 cri.go:89] found id: ""
	I1008 14:52:03.257927  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.257934  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:03.257939  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:03.257989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:03.282476  124886 cri.go:89] found id: ""
	I1008 14:52:03.282491  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.282498  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:03.282503  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:03.282556  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:03.307994  124886 cri.go:89] found id: ""
	I1008 14:52:03.308009  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.308016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:03.308020  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:03.308066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:03.333961  124886 cri.go:89] found id: ""
	I1008 14:52:03.333978  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.333985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:03.333990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:03.334036  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:03.360461  124886 cri.go:89] found id: ""
	I1008 14:52:03.360480  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.360491  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:03.360498  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:03.360546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:03.385935  124886 cri.go:89] found id: ""
	I1008 14:52:03.385951  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.385958  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:03.385965  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:03.385980  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:03.399673  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:03.399689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:03.456423  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:03.456433  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:03.456459  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:03.519728  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:03.519750  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.549347  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:03.549365  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.121403  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:06.132277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:06.132329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:06.158234  124886 cri.go:89] found id: ""
	I1008 14:52:06.158248  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.158255  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:06.158260  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:06.158308  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:06.184118  124886 cri.go:89] found id: ""
	I1008 14:52:06.184136  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.184145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:06.184151  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:06.184201  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:06.210586  124886 cri.go:89] found id: ""
	I1008 14:52:06.210604  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.210613  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:06.210619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:06.210682  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:06.236986  124886 cri.go:89] found id: ""
	I1008 14:52:06.237004  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.237013  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:06.237018  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:06.237064  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:06.264151  124886 cri.go:89] found id: ""
	I1008 14:52:06.264172  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.264182  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:06.264188  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:06.264240  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:06.290106  124886 cri.go:89] found id: ""
	I1008 14:52:06.290120  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.290126  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:06.290132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:06.290177  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:06.316419  124886 cri.go:89] found id: ""
	I1008 14:52:06.316435  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.316453  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:06.316464  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:06.316477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:06.377522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:06.377544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:06.407056  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:06.407075  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.474318  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:06.474342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:06.488482  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:06.488502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:06.546904  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.048569  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:09.059380  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:09.059436  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:09.085888  124886 cri.go:89] found id: ""
	I1008 14:52:09.085906  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.085912  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:09.085918  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:09.085971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:09.113858  124886 cri.go:89] found id: ""
	I1008 14:52:09.113875  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.113882  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:09.113892  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:09.113939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:09.140388  124886 cri.go:89] found id: ""
	I1008 14:52:09.140407  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.140414  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:09.140420  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:09.140493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:09.168003  124886 cri.go:89] found id: ""
	I1008 14:52:09.168018  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.168025  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:09.168030  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:09.168075  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:09.194655  124886 cri.go:89] found id: ""
	I1008 14:52:09.194681  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.194690  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:09.194696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:09.194757  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:09.221388  124886 cri.go:89] found id: ""
	I1008 14:52:09.221405  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.221411  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:09.221416  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:09.221490  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:09.247075  124886 cri.go:89] found id: ""
	I1008 14:52:09.247093  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.247102  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:09.247122  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:09.247133  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:09.304638  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.304650  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:09.304664  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:09.368718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:09.368742  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:09.399217  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:09.399239  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:09.468608  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:09.468629  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:11.984769  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:11.995534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:11.995596  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:12.020218  124886 cri.go:89] found id: ""
	I1008 14:52:12.020234  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.020241  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:12.020247  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:12.020289  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:12.045959  124886 cri.go:89] found id: ""
	I1008 14:52:12.045978  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.045989  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:12.045996  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:12.046103  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:12.072101  124886 cri.go:89] found id: ""
	I1008 14:52:12.072118  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.072125  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:12.072129  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:12.072174  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:12.098793  124886 cri.go:89] found id: ""
	I1008 14:52:12.098808  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.098814  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:12.098819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:12.098871  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:12.124876  124886 cri.go:89] found id: ""
	I1008 14:52:12.124891  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.124900  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:12.124906  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:12.124973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:12.151678  124886 cri.go:89] found id: ""
	I1008 14:52:12.151695  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.151703  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:12.151708  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:12.151764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:12.176969  124886 cri.go:89] found id: ""
	I1008 14:52:12.176986  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.176994  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:12.177004  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:12.177019  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:12.247581  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:12.247604  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:12.262272  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:12.262290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:12.319283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:12.319306  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:12.319318  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:12.383384  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:12.383406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:14.914713  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:14.925495  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:14.925548  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:14.951182  124886 cri.go:89] found id: ""
	I1008 14:52:14.951197  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.951205  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:14.951209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:14.951265  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:14.978925  124886 cri.go:89] found id: ""
	I1008 14:52:14.978941  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.978948  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:14.978953  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:14.979004  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:15.003964  124886 cri.go:89] found id: ""
	I1008 14:52:15.003983  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.003992  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:15.003997  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:15.004061  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:15.030077  124886 cri.go:89] found id: ""
	I1008 14:52:15.030095  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.030102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:15.030107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:15.030154  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:15.055689  124886 cri.go:89] found id: ""
	I1008 14:52:15.055704  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.055711  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:15.055715  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:15.055760  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:15.081174  124886 cri.go:89] found id: ""
	I1008 14:52:15.081191  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.081198  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:15.081203  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:15.081262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:15.107235  124886 cri.go:89] found id: ""
	I1008 14:52:15.107251  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.107257  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:15.107265  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:15.107279  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:15.174130  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:15.174161  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:15.188435  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:15.188471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:15.244706  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:15.244720  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:15.244735  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:15.305071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:15.305098  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:17.835094  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:17.845787  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:17.845870  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:17.871734  124886 cri.go:89] found id: ""
	I1008 14:52:17.871749  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.871757  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:17.871764  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:17.871823  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:17.897412  124886 cri.go:89] found id: ""
	I1008 14:52:17.897433  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.897458  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:17.897467  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:17.897535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:17.925096  124886 cri.go:89] found id: ""
	I1008 14:52:17.925110  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.925117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:17.925122  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:17.925168  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:17.951272  124886 cri.go:89] found id: ""
	I1008 14:52:17.951289  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.951297  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:17.951301  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:17.951347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:17.976965  124886 cri.go:89] found id: ""
	I1008 14:52:17.976985  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.976992  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:17.976998  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:17.977042  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:18.003041  124886 cri.go:89] found id: ""
	I1008 14:52:18.003057  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.003064  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:18.003069  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:18.003113  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:18.028732  124886 cri.go:89] found id: ""
	I1008 14:52:18.028748  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.028756  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:18.028764  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:18.028774  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:18.092440  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:18.092467  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:18.121965  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:18.121984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:18.191653  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:18.191679  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:18.205820  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:18.205839  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:18.261002  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:20.762706  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:20.773592  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:20.773660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:20.799324  124886 cri.go:89] found id: ""
	I1008 14:52:20.799340  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.799347  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:20.799352  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:20.799394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:20.825415  124886 cri.go:89] found id: ""
	I1008 14:52:20.825430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.825436  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:20.825452  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:20.825504  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:20.851415  124886 cri.go:89] found id: ""
	I1008 14:52:20.851430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.851437  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:20.851454  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:20.851503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:20.878438  124886 cri.go:89] found id: ""
	I1008 14:52:20.878476  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.878484  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:20.878489  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:20.878536  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:20.903857  124886 cri.go:89] found id: ""
	I1008 14:52:20.903873  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.903884  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:20.903890  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:20.903948  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:20.930746  124886 cri.go:89] found id: ""
	I1008 14:52:20.930763  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.930770  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:20.930791  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:20.930842  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:20.956487  124886 cri.go:89] found id: ""
	I1008 14:52:20.956504  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.956510  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:20.956518  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:20.956528  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:21.026065  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:21.026087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:21.040112  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:21.040129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:21.095891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:21.095902  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:21.095914  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:21.159107  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:21.159129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:23.687668  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:23.698250  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:23.698317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:23.723805  124886 cri.go:89] found id: ""
	I1008 14:52:23.723832  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.723842  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:23.723850  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:23.723900  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:23.749813  124886 cri.go:89] found id: ""
	I1008 14:52:23.749831  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.749840  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:23.749847  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:23.749918  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:23.774918  124886 cri.go:89] found id: ""
	I1008 14:52:23.774934  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.774940  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:23.774945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:23.774999  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:23.800898  124886 cri.go:89] found id: ""
	I1008 14:52:23.800918  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.800925  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:23.800930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:23.800978  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:23.827330  124886 cri.go:89] found id: ""
	I1008 14:52:23.827348  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.827356  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:23.827360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:23.827405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:23.853485  124886 cri.go:89] found id: ""
	I1008 14:52:23.853503  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.853510  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:23.853515  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:23.853560  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:23.878936  124886 cri.go:89] found id: ""
	I1008 14:52:23.878957  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.878967  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:23.878976  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:23.878994  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:23.934831  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:23.934841  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:23.934851  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:23.993858  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:23.993885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:24.022945  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:24.022962  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:24.092836  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:24.092865  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.608369  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:26.619983  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:26.620060  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:26.646593  124886 cri.go:89] found id: ""
	I1008 14:52:26.646611  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.646621  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:26.646627  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:26.646678  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:26.673294  124886 cri.go:89] found id: ""
	I1008 14:52:26.673310  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.673317  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:26.673324  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:26.673367  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:26.699235  124886 cri.go:89] found id: ""
	I1008 14:52:26.699251  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.699257  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:26.699262  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:26.699320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:26.724993  124886 cri.go:89] found id: ""
	I1008 14:52:26.725009  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.725016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:26.725021  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:26.725074  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:26.749744  124886 cri.go:89] found id: ""
	I1008 14:52:26.749760  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.749767  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:26.749772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:26.749821  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:26.775226  124886 cri.go:89] found id: ""
	I1008 14:52:26.775246  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.775255  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:26.775260  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:26.775316  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:26.805104  124886 cri.go:89] found id: ""
	I1008 14:52:26.805120  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.805128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:26.805136  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:26.805152  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:26.834601  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:26.834618  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:26.900340  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:26.900361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.914389  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:26.914406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:26.969896  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:26.969911  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:26.969927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.531143  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:29.542884  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:29.542952  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:29.570323  124886 cri.go:89] found id: ""
	I1008 14:52:29.570339  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.570345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:29.570350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:29.570395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:29.596735  124886 cri.go:89] found id: ""
	I1008 14:52:29.596750  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.596756  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:29.596762  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:29.596811  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:29.622878  124886 cri.go:89] found id: ""
	I1008 14:52:29.622892  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.622898  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:29.622903  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:29.622950  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:29.648836  124886 cri.go:89] found id: ""
	I1008 14:52:29.648857  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.648880  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:29.648887  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:29.648939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:29.674729  124886 cri.go:89] found id: ""
	I1008 14:52:29.674747  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.674753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:29.674758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:29.674802  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:29.700542  124886 cri.go:89] found id: ""
	I1008 14:52:29.700558  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.700565  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:29.700571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:29.700615  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:29.726353  124886 cri.go:89] found id: ""
	I1008 14:52:29.726369  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.726375  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:29.726383  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:29.726395  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:29.790538  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:29.790560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:29.805071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:29.805087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:29.861336  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:29.861354  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:29.861367  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.921484  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:29.921507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.452001  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:32.462783  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:32.462839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:32.488895  124886 cri.go:89] found id: ""
	I1008 14:52:32.488913  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.488922  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:32.488929  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:32.488977  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:32.514655  124886 cri.go:89] found id: ""
	I1008 14:52:32.514674  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.514683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:32.514688  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:32.514739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:32.542007  124886 cri.go:89] found id: ""
	I1008 14:52:32.542027  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.542037  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:32.542044  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:32.542100  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:32.569946  124886 cri.go:89] found id: ""
	I1008 14:52:32.569963  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.569970  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:32.569976  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:32.570022  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:32.595032  124886 cri.go:89] found id: ""
	I1008 14:52:32.595051  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.595061  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:32.595066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:32.595127  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:32.621883  124886 cri.go:89] found id: ""
	I1008 14:52:32.621903  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.621923  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:32.621930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:32.621983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:32.647589  124886 cri.go:89] found id: ""
	I1008 14:52:32.647606  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.647612  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:32.647620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:32.647630  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:32.703098  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:32.703108  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:32.703129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:32.766481  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:32.766502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.794530  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:32.794546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:32.864662  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:32.864687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.381050  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:35.391807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:35.391868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:35.418369  124886 cri.go:89] found id: ""
	I1008 14:52:35.418388  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.418397  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:35.418402  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:35.418467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:35.444660  124886 cri.go:89] found id: ""
	I1008 14:52:35.444676  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.444683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:35.444687  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:35.444736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:35.471158  124886 cri.go:89] found id: ""
	I1008 14:52:35.471183  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.471190  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:35.471195  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:35.471238  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:35.496271  124886 cri.go:89] found id: ""
	I1008 14:52:35.496288  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.496295  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:35.496300  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:35.496345  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:35.521987  124886 cri.go:89] found id: ""
	I1008 14:52:35.522005  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.522015  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:35.522039  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:35.522098  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:35.547647  124886 cri.go:89] found id: ""
	I1008 14:52:35.547664  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.547673  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:35.547678  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:35.547723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:35.573056  124886 cri.go:89] found id: ""
	I1008 14:52:35.573075  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.573085  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:35.573109  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:35.573123  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:35.640898  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:35.640923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.655247  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:35.655265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:35.712555  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:35.712565  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:35.712575  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:35.772556  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:35.772579  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.301881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:38.312627  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:38.312694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:38.337192  124886 cri.go:89] found id: ""
	I1008 14:52:38.337210  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.337220  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:38.337227  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:38.337278  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:38.361703  124886 cri.go:89] found id: ""
	I1008 14:52:38.361721  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.361730  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:38.361736  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:38.361786  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:38.387263  124886 cri.go:89] found id: ""
	I1008 14:52:38.387279  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.387286  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:38.387290  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:38.387334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:38.413808  124886 cri.go:89] found id: ""
	I1008 14:52:38.413824  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.413830  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:38.413835  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:38.413880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:38.440014  124886 cri.go:89] found id: ""
	I1008 14:52:38.440029  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.440036  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:38.440041  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:38.440085  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:38.466144  124886 cri.go:89] found id: ""
	I1008 14:52:38.466164  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.466174  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:38.466181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:38.466229  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:38.491536  124886 cri.go:89] found id: ""
	I1008 14:52:38.491554  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.491563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:38.491573  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:38.491584  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.520248  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:38.520265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:38.588833  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:38.588861  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:38.603136  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:38.603155  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:38.659278  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:38.659290  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:38.659301  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.224716  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:41.235550  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:41.235600  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:41.261421  124886 cri.go:89] found id: ""
	I1008 14:52:41.261436  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.261455  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:41.261463  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:41.261516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:41.286798  124886 cri.go:89] found id: ""
	I1008 14:52:41.286813  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.286839  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:41.286844  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:41.286904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:41.312542  124886 cri.go:89] found id: ""
	I1008 14:52:41.312558  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.312567  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:41.312574  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:41.312623  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:41.339001  124886 cri.go:89] found id: ""
	I1008 14:52:41.339016  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.339022  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:41.339027  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:41.339073  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:41.365019  124886 cri.go:89] found id: ""
	I1008 14:52:41.365040  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.365049  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:41.365056  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:41.365115  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:41.389878  124886 cri.go:89] found id: ""
	I1008 14:52:41.389897  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.389904  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:41.389910  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:41.389960  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:41.415856  124886 cri.go:89] found id: ""
	I1008 14:52:41.415875  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.415884  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:41.415895  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:41.415909  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:41.481175  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:41.481196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:41.495356  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:41.495373  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:41.552891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:41.552910  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:41.552927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.615245  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:41.615282  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:44.146351  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:44.157234  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:44.157294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:44.183016  124886 cri.go:89] found id: ""
	I1008 14:52:44.183032  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.183039  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:44.183044  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:44.183094  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:44.209452  124886 cri.go:89] found id: ""
	I1008 14:52:44.209471  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.209480  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:44.209487  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:44.209535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:44.236057  124886 cri.go:89] found id: ""
	I1008 14:52:44.236079  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.236088  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:44.236094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:44.236165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:44.262249  124886 cri.go:89] found id: ""
	I1008 14:52:44.262265  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.262274  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:44.262281  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:44.262333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:44.288222  124886 cri.go:89] found id: ""
	I1008 14:52:44.288240  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.288249  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:44.288254  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:44.288303  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:44.312991  124886 cri.go:89] found id: ""
	I1008 14:52:44.313009  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.313017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:44.313022  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:44.313066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:44.338794  124886 cri.go:89] found id: ""
	I1008 14:52:44.338814  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.338823  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:44.338835  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:44.338849  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:44.408632  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:44.408655  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:44.423360  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:44.423381  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:44.481035  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:44.481052  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:44.481068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:44.545061  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:44.545093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.075772  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:47.086739  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:47.086782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:47.112465  124886 cri.go:89] found id: ""
	I1008 14:52:47.112483  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.112492  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:47.112497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:47.112546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:47.140124  124886 cri.go:89] found id: ""
	I1008 14:52:47.140139  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.140145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:47.140150  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:47.140194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:47.167347  124886 cri.go:89] found id: ""
	I1008 14:52:47.167366  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.167376  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:47.167382  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:47.167428  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:47.193008  124886 cri.go:89] found id: ""
	I1008 14:52:47.193025  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.193032  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:47.193037  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:47.193081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:47.218907  124886 cri.go:89] found id: ""
	I1008 14:52:47.218922  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.218932  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:47.218938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:47.218992  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:47.244390  124886 cri.go:89] found id: ""
	I1008 14:52:47.244406  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.244413  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:47.244418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:47.244485  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:47.270432  124886 cri.go:89] found id: ""
	I1008 14:52:47.270460  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.270473  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:47.270482  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:47.270496  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:47.284419  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:47.284434  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:47.340814  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:47.340829  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:47.340840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:47.405347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:47.405371  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.434675  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:47.434693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:50.001509  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:50.012521  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:50.012580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:50.038871  124886 cri.go:89] found id: ""
	I1008 14:52:50.038886  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.038895  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:50.038901  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:50.038945  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:50.065691  124886 cri.go:89] found id: ""
	I1008 14:52:50.065707  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.065713  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:50.065718  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:50.065764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:50.091421  124886 cri.go:89] found id: ""
	I1008 14:52:50.091439  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.091459  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:50.091466  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:50.091516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:50.117900  124886 cri.go:89] found id: ""
	I1008 14:52:50.117916  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.117922  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:50.117927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:50.117971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:50.143795  124886 cri.go:89] found id: ""
	I1008 14:52:50.143811  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.143837  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:50.143842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:50.143889  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:50.170009  124886 cri.go:89] found id: ""
	I1008 14:52:50.170025  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.170032  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:50.170036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:50.170081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:50.195182  124886 cri.go:89] found id: ""
	I1008 14:52:50.195198  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.195204  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:50.195213  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:50.195226  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:50.208906  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:50.208923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:50.263732  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:50.263744  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:50.263754  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:50.321967  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:50.321990  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:50.350825  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:50.350843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:52.919243  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:52.929975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:52.930069  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:52.956423  124886 cri.go:89] found id: ""
	I1008 14:52:52.956439  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.956463  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:52.956470  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:52.956519  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:52.982128  124886 cri.go:89] found id: ""
	I1008 14:52:52.982143  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.982150  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:52.982155  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:52.982204  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:53.008335  124886 cri.go:89] found id: ""
	I1008 14:52:53.008351  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.008358  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:53.008363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:53.008416  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:53.035683  124886 cri.go:89] found id: ""
	I1008 14:52:53.035698  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.035705  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:53.035710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:53.035753  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:53.061482  124886 cri.go:89] found id: ""
	I1008 14:52:53.061590  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.061610  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:53.061619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:53.061673  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:53.088358  124886 cri.go:89] found id: ""
	I1008 14:52:53.088375  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.088384  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:53.088390  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:53.088467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:53.113970  124886 cri.go:89] found id: ""
	I1008 14:52:53.113988  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.113995  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:53.114003  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:53.114016  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:53.181486  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:53.181511  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:53.195603  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:53.195620  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:53.251571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:53.251582  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:53.251592  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:53.312589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:53.312610  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:55.843180  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:55.854192  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:55.854250  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:55.878967  124886 cri.go:89] found id: ""
	I1008 14:52:55.878984  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.878992  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:55.878997  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:55.879050  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:55.904136  124886 cri.go:89] found id: ""
	I1008 14:52:55.904151  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.904157  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:55.904174  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:55.904216  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:55.928319  124886 cri.go:89] found id: ""
	I1008 14:52:55.928337  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.928348  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:55.928353  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:55.928406  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:55.955314  124886 cri.go:89] found id: ""
	I1008 14:52:55.955330  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.955338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:55.955345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:55.955405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:55.980957  124886 cri.go:89] found id: ""
	I1008 14:52:55.980976  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.980985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:55.980992  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:55.981040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:56.006492  124886 cri.go:89] found id: ""
	I1008 14:52:56.006507  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.006514  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:56.006519  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:56.006566  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:56.032919  124886 cri.go:89] found id: ""
	I1008 14:52:56.032934  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.032940  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:56.032948  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:56.032960  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:56.061693  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:56.061713  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:56.127262  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:56.127284  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:56.141728  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:56.141744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:56.197783  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:56.197799  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:56.197815  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:58.759309  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:58.770096  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:58.770150  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:58.796177  124886 cri.go:89] found id: ""
	I1008 14:52:58.796192  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.796199  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:58.796208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:58.796260  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:58.821988  124886 cri.go:89] found id: ""
	I1008 14:52:58.822006  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.822013  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:58.822018  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:58.822068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:58.847935  124886 cri.go:89] found id: ""
	I1008 14:52:58.847953  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.847961  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:58.847966  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:58.848015  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:58.874796  124886 cri.go:89] found id: ""
	I1008 14:52:58.874814  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.874821  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:58.874826  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:58.874880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:58.899925  124886 cri.go:89] found id: ""
	I1008 14:52:58.899941  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.899948  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:58.899953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:58.900008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:58.926934  124886 cri.go:89] found id: ""
	I1008 14:52:58.926950  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.926958  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:58.926963  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:58.927006  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:58.953664  124886 cri.go:89] found id: ""
	I1008 14:52:58.953680  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.953687  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:58.953694  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:58.953709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:59.010616  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:59.010629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:59.010640  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:59.071358  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:59.071382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:59.099863  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:59.099886  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:59.168071  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:59.168163  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.684667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:01.695456  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:01.695524  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:01.721627  124886 cri.go:89] found id: ""
	I1008 14:53:01.721644  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.721652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:01.721656  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:01.721715  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:01.748495  124886 cri.go:89] found id: ""
	I1008 14:53:01.748512  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.748518  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:01.748523  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:01.748583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:01.774281  124886 cri.go:89] found id: ""
	I1008 14:53:01.774298  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.774310  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:01.774316  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:01.774377  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:01.800414  124886 cri.go:89] found id: ""
	I1008 14:53:01.800430  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.800437  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:01.800458  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:01.800513  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:01.825727  124886 cri.go:89] found id: ""
	I1008 14:53:01.825746  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.825753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:01.825758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:01.825804  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:01.852777  124886 cri.go:89] found id: ""
	I1008 14:53:01.852794  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.852802  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:01.852807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:01.852855  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:01.879499  124886 cri.go:89] found id: ""
	I1008 14:53:01.879516  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.879522  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:01.879530  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:01.879542  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:01.908367  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:01.908386  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:01.976337  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:01.976358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.990844  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:01.990863  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:02.047840  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:02.047852  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:02.047864  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.612824  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:04.623886  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:04.623937  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:04.650245  124886 cri.go:89] found id: ""
	I1008 14:53:04.650265  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.650274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:04.650282  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:04.650338  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:04.675795  124886 cri.go:89] found id: ""
	I1008 14:53:04.675814  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.675849  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:04.675856  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:04.675910  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:04.701855  124886 cri.go:89] found id: ""
	I1008 14:53:04.701874  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.701883  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:04.701889  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:04.701951  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:04.727569  124886 cri.go:89] found id: ""
	I1008 14:53:04.727584  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.727590  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:04.727595  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:04.727637  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:04.753254  124886 cri.go:89] found id: ""
	I1008 14:53:04.753269  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.753276  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:04.753280  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:04.753329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:04.779529  124886 cri.go:89] found id: ""
	I1008 14:53:04.779548  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.779557  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:04.779564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:04.779611  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:04.806307  124886 cri.go:89] found id: ""
	I1008 14:53:04.806326  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.806335  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:04.806346  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:04.806361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:04.820357  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:04.820374  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:04.876718  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:04.876732  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:04.876748  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.940387  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:04.940412  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:04.969994  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:04.970009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.538422  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:07.550831  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:07.550884  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:07.577673  124886 cri.go:89] found id: ""
	I1008 14:53:07.577687  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.577693  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:07.577698  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:07.577750  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:07.603662  124886 cri.go:89] found id: ""
	I1008 14:53:07.603680  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.603695  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:07.603700  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:07.603746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:07.629802  124886 cri.go:89] found id: ""
	I1008 14:53:07.629821  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.629830  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:07.629834  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:07.629886  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:07.656081  124886 cri.go:89] found id: ""
	I1008 14:53:07.656096  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.656102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:07.656107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:07.656170  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:07.682162  124886 cri.go:89] found id: ""
	I1008 14:53:07.682177  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.682184  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:07.682189  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:07.682233  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:07.708617  124886 cri.go:89] found id: ""
	I1008 14:53:07.708635  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.708648  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:07.708653  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:07.708708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:07.734755  124886 cri.go:89] found id: ""
	I1008 14:53:07.734772  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.734782  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:07.734793  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:07.734807  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:07.794522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:07.794548  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:07.823563  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:07.823581  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.892786  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:07.892808  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:07.907262  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:07.907281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:07.962940  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.464656  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:10.476746  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:10.476800  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:10.502937  124886 cri.go:89] found id: ""
	I1008 14:53:10.502958  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.502968  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:10.502974  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:10.503025  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:10.529780  124886 cri.go:89] found id: ""
	I1008 14:53:10.529796  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.529803  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:10.529807  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:10.529856  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:10.556092  124886 cri.go:89] found id: ""
	I1008 14:53:10.556108  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.556117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:10.556124  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:10.556184  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:10.582264  124886 cri.go:89] found id: ""
	I1008 14:53:10.582281  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.582290  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:10.582296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:10.582354  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:10.608631  124886 cri.go:89] found id: ""
	I1008 14:53:10.608647  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.608655  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:10.608662  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:10.608721  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:10.635697  124886 cri.go:89] found id: ""
	I1008 14:53:10.635715  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.635725  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:10.635732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:10.635793  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:10.661998  124886 cri.go:89] found id: ""
	I1008 14:53:10.662018  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.662028  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:10.662040  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:10.662055  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:10.728096  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:10.728121  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:10.742521  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:10.742543  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:10.799551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.799566  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:10.799578  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:10.863614  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:10.863636  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.396084  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:13.407066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:13.407128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:13.433323  124886 cri.go:89] found id: ""
	I1008 14:53:13.433339  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.433345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:13.433350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:13.433393  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:13.460409  124886 cri.go:89] found id: ""
	I1008 14:53:13.460510  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.460522  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:13.460528  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:13.460589  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:13.487660  124886 cri.go:89] found id: ""
	I1008 14:53:13.487679  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.487689  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:13.487696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:13.487746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:13.515522  124886 cri.go:89] found id: ""
	I1008 14:53:13.515538  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.515546  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:13.515551  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:13.515595  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:13.540751  124886 cri.go:89] found id: ""
	I1008 14:53:13.540767  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.540773  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:13.540778  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:13.540846  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:13.566812  124886 cri.go:89] found id: ""
	I1008 14:53:13.566829  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.566837  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:13.566842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:13.566904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:13.593236  124886 cri.go:89] found id: ""
	I1008 14:53:13.593255  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.593262  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:13.593271  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:13.593281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:13.657627  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:13.657651  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.686303  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:13.686320  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:13.755568  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:13.755591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:13.769800  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:13.769819  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:13.826318  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
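The repeated "connect: connection refused" on [::1]:8441 means nothing was listening on the apiserver port inside the node during this window. As a minimal sketch of how to confirm that by hand, assuming a shell on the node (for example via minikube ssh; the port 8441 is taken from the log above, the profile name is not shown here):

	# check whether any process is bound to the apiserver port seen in the log
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	# probe the health endpoint directly; -k skips TLS verification, acceptable for a quick liveness probe
	curl -ksS https://localhost:8441/healthz || true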
	I1008 14:53:16.327013  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:16.337840  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:16.337908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:16.363203  124886 cri.go:89] found id: ""
	I1008 14:53:16.363221  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.363230  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:16.363235  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:16.363288  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:16.388535  124886 cri.go:89] found id: ""
	I1008 14:53:16.388551  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.388557  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:16.388563  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:16.388606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:16.414195  124886 cri.go:89] found id: ""
	I1008 14:53:16.414213  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.414221  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:16.414226  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:16.414274  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:16.440199  124886 cri.go:89] found id: ""
	I1008 14:53:16.440214  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.440221  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:16.440227  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:16.440283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:16.465899  124886 cri.go:89] found id: ""
	I1008 14:53:16.465918  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.465925  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:16.465931  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:16.465976  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:16.491135  124886 cri.go:89] found id: ""
	I1008 14:53:16.491151  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.491157  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:16.491162  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:16.491205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:16.517298  124886 cri.go:89] found id: ""
	I1008 14:53:16.517315  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.517323  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:16.517331  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:16.517342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:16.581777  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:16.581803  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:16.611824  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:16.611843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:16.679935  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:16.679957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:16.694087  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:16.694103  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:16.750382  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
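Each retry cycle above performs the same per-component presence check; the crictl invocations are verbatim from the log. A minimal script reproducing that sweep, assuming crictl is on the node's PATH (the log's own fallback is `which crictl || echo crictl`):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # same check the log performs: list container IDs in any state whose name matches the component
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -z "$ids" ]; then
	    echo "no container found matching $c"
	  else
	    echo "$c: $ids"
	  fi
	done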
	I1008 14:53:19.252068  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:19.262927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:19.262980  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:19.288263  124886 cri.go:89] found id: ""
	I1008 14:53:19.288280  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.288286  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:19.288291  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:19.288334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:19.314749  124886 cri.go:89] found id: ""
	I1008 14:53:19.314769  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.314776  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:19.314781  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:19.314833  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:19.343105  124886 cri.go:89] found id: ""
	I1008 14:53:19.343124  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.343132  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:19.343137  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:19.343194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:19.369348  124886 cri.go:89] found id: ""
	I1008 14:53:19.369367  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.369376  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:19.369384  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:19.369438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:19.394541  124886 cri.go:89] found id: ""
	I1008 14:53:19.394556  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.394564  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:19.394569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:19.394617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:19.419883  124886 cri.go:89] found id: ""
	I1008 14:53:19.419900  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.419907  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:19.419911  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:19.419959  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:19.447316  124886 cri.go:89] found id: ""
	I1008 14:53:19.447332  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.447339  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:19.447347  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:19.447360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:19.509190  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:19.509213  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:19.538580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:19.538601  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:19.610379  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:19.610406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:19.625094  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:19.625115  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:19.682583  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:22.184381  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:22.195435  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:22.195496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:22.222530  124886 cri.go:89] found id: ""
	I1008 14:53:22.222549  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.222559  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:22.222565  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:22.222631  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:22.249103  124886 cri.go:89] found id: ""
	I1008 14:53:22.249118  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.249125  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:22.249130  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:22.249185  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:22.275859  124886 cri.go:89] found id: ""
	I1008 14:53:22.275877  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.275886  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:22.275891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:22.275944  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:22.301816  124886 cri.go:89] found id: ""
	I1008 14:53:22.301835  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.301845  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:22.301852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:22.301906  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:22.328795  124886 cri.go:89] found id: ""
	I1008 14:53:22.328810  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.328817  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:22.328821  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:22.328877  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:22.355119  124886 cri.go:89] found id: ""
	I1008 14:53:22.355134  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.355141  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:22.355146  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:22.355200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:22.382211  124886 cri.go:89] found id: ""
	I1008 14:53:22.382229  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.382238  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:22.382248  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:22.382262  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:22.442814  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:22.442840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:22.473721  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:22.473746  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:22.539788  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:22.539811  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:22.554277  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:22.554295  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:22.610102  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
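None of the control-plane containers ever appear, which usually points at the kubelet failing to start the static pods rather than at CRI-O itself. A hedged sketch of the next checks one might run; this assumes minikube's usual kubeadm-style layout with static pod manifests under /etc/kubernetes/manifests, a path that does not appear in the log above:

	# is the kubelet service itself running?
	sudo systemctl status kubelet --no-pager
	# are the control-plane static pod manifests present?
	sudo ls -l /etc/kubernetes/manifests
	# recent kubelet messages, the same unit the log above pulls 400 lines from
	sudo journalctl -u kubelet -n 100 --no-pager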
	I1008 14:53:25.110358  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:25.121359  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:25.121409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:25.146726  124886 cri.go:89] found id: ""
	I1008 14:53:25.146741  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.146747  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:25.146752  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:25.146797  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:25.173762  124886 cri.go:89] found id: ""
	I1008 14:53:25.173780  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.173788  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:25.173792  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:25.173839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:25.200613  124886 cri.go:89] found id: ""
	I1008 14:53:25.200630  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.200636  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:25.200641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:25.200686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:25.227307  124886 cri.go:89] found id: ""
	I1008 14:53:25.227327  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.227338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:25.227345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:25.227395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:25.253257  124886 cri.go:89] found id: ""
	I1008 14:53:25.253272  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.253278  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:25.253283  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:25.253329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:25.281060  124886 cri.go:89] found id: ""
	I1008 14:53:25.281077  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.281089  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:25.281094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:25.281140  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:25.306651  124886 cri.go:89] found id: ""
	I1008 14:53:25.306668  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.306678  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:25.306688  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:25.306699  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:25.373410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:25.373433  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:25.388282  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:25.388304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:25.445863  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.445874  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:25.445885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:25.510564  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:25.510590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.041417  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:28.052378  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:28.052432  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:28.078711  124886 cri.go:89] found id: ""
	I1008 14:53:28.078728  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.078734  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:28.078740  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:28.078782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:28.105010  124886 cri.go:89] found id: ""
	I1008 14:53:28.105025  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.105031  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:28.105036  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:28.105088  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:28.131983  124886 cri.go:89] found id: ""
	I1008 14:53:28.132001  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.132011  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:28.132017  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:28.132076  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:28.159135  124886 cri.go:89] found id: ""
	I1008 14:53:28.159153  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.159160  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:28.159166  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:28.159212  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:28.187793  124886 cri.go:89] found id: ""
	I1008 14:53:28.187811  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.187821  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:28.187827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:28.187872  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:28.214232  124886 cri.go:89] found id: ""
	I1008 14:53:28.214251  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.214265  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:28.214272  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:28.214335  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:28.240649  124886 cri.go:89] found id: ""
	I1008 14:53:28.240663  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.240669  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:28.240677  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:28.240687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:28.304071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:28.304094  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.333331  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:28.333346  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:28.401896  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:28.401919  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:28.416514  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:28.416531  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:28.472271  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
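The "Gathering logs for ..." steps in each cycle map to the commands below, copied from the log; bundled together they are convenient to run manually when reproducing this failure. Only the grouping and the output file name are additions here:

	{
	  echo "== crio ==";       sudo journalctl -u crio -n 400
	  echo "== kubelet ==";    sudo journalctl -u kubelet -n 400
	  echo "== dmesg ==";      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  echo "== containers =="; sudo crictl ps -a || sudo docker ps -a
	} > /tmp/minikube-debug.log 2>&1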
	I1008 14:53:30.972553  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:30.983612  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:30.983666  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:31.011336  124886 cri.go:89] found id: ""
	I1008 14:53:31.011350  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.011357  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:31.011362  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:31.011405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:31.036913  124886 cri.go:89] found id: ""
	I1008 14:53:31.036935  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.036944  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:31.036948  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:31.037003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:31.063500  124886 cri.go:89] found id: ""
	I1008 14:53:31.063516  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.063523  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:31.063527  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:31.063582  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:31.091035  124886 cri.go:89] found id: ""
	I1008 14:53:31.091057  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.091066  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:31.091073  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:31.091123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:31.117295  124886 cri.go:89] found id: ""
	I1008 14:53:31.117310  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.117317  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:31.117322  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:31.117372  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:31.143795  124886 cri.go:89] found id: ""
	I1008 14:53:31.143810  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.143815  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:31.143820  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:31.143863  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:31.170134  124886 cri.go:89] found id: ""
	I1008 14:53:31.170150  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.170157  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:31.170164  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:31.170174  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:31.241300  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:31.241324  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:31.255637  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:31.255656  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:31.312716  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:31.312725  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:31.312736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:31.377091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:31.377114  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:33.907080  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:33.918207  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:33.918262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:33.944092  124886 cri.go:89] found id: ""
	I1008 14:53:33.944111  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.944122  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:33.944129  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:33.944192  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:33.970271  124886 cri.go:89] found id: ""
	I1008 14:53:33.970286  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.970293  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:33.970298  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:33.970347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:33.996407  124886 cri.go:89] found id: ""
	I1008 14:53:33.996421  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.996427  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:33.996433  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:33.996503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:34.023513  124886 cri.go:89] found id: ""
	I1008 14:53:34.023533  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.023542  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:34.023549  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:34.023606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:34.050777  124886 cri.go:89] found id: ""
	I1008 14:53:34.050797  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.050807  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:34.050813  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:34.050868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:34.077691  124886 cri.go:89] found id: ""
	I1008 14:53:34.077710  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.077719  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:34.077724  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:34.077769  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:34.104354  124886 cri.go:89] found id: ""
	I1008 14:53:34.104373  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.104380  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:34.104388  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:34.104404  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:34.171873  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:34.171899  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:34.185891  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:34.185908  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:34.243162  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:34.243172  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:34.243185  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:34.306766  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:34.306791  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:36.836905  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:36.848013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:36.848068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:36.873912  124886 cri.go:89] found id: ""
	I1008 14:53:36.873930  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.873938  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:36.873944  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:36.873994  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:36.899859  124886 cri.go:89] found id: ""
	I1008 14:53:36.899875  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.899881  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:36.899886  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:36.899930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:36.926292  124886 cri.go:89] found id: ""
	I1008 14:53:36.926314  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.926321  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:36.926326  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:36.926370  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:36.952172  124886 cri.go:89] found id: ""
	I1008 14:53:36.952189  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.952196  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:36.952201  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:36.952248  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:36.978525  124886 cri.go:89] found id: ""
	I1008 14:53:36.978542  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.978548  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:36.978553  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:36.978605  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:37.005955  124886 cri.go:89] found id: ""
	I1008 14:53:37.005973  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.005984  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:37.005990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:37.006037  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:37.032282  124886 cri.go:89] found id: ""
	I1008 14:53:37.032300  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.032310  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:37.032320  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:37.032336  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:37.100471  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:37.100507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:37.114707  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:37.114727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:37.173117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:37.173128  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:37.173138  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:37.237613  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:37.237637  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:39.769167  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:39.780181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:39.780239  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:39.805900  124886 cri.go:89] found id: ""
	I1008 14:53:39.805921  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.805928  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:39.805935  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:39.805982  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:39.832463  124886 cri.go:89] found id: ""
	I1008 14:53:39.832485  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.832493  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:39.832501  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:39.832565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:39.859105  124886 cri.go:89] found id: ""
	I1008 14:53:39.859120  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.859127  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:39.859132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:39.859176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:39.885372  124886 cri.go:89] found id: ""
	I1008 14:53:39.885395  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.885402  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:39.885410  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:39.885476  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:39.911669  124886 cri.go:89] found id: ""
	I1008 14:53:39.911684  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.911691  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:39.911696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:39.911743  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:39.939236  124886 cri.go:89] found id: ""
	I1008 14:53:39.939254  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.939263  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:39.939269  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:39.939329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:39.967816  124886 cri.go:89] found id: ""
	I1008 14:53:39.967833  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.967839  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:39.967847  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:39.967859  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:39.982071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:39.982090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:40.038524  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:40.038545  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:40.038560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:40.099347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:40.099369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:40.128637  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:40.128654  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.700345  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:42.711170  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:42.711224  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:42.738404  124886 cri.go:89] found id: ""
	I1008 14:53:42.738420  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.738426  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:42.738431  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:42.738496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:42.765170  124886 cri.go:89] found id: ""
	I1008 14:53:42.765185  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.765192  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:42.765196  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:42.765244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:42.790844  124886 cri.go:89] found id: ""
	I1008 14:53:42.790862  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.790870  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:42.790876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:42.790920  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:42.817749  124886 cri.go:89] found id: ""
	I1008 14:53:42.817765  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.817772  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:42.817777  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:42.817826  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:42.844796  124886 cri.go:89] found id: ""
	I1008 14:53:42.844815  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.844823  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:42.844827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:42.844882  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:42.870976  124886 cri.go:89] found id: ""
	I1008 14:53:42.870993  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.871001  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:42.871006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:42.871051  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:42.897679  124886 cri.go:89] found id: ""
	I1008 14:53:42.897698  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.897707  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:42.897716  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:42.897727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.967720  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:42.967744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:42.981967  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:42.981984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:43.039728  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:43.039742  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:43.039753  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:43.101886  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:43.101911  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:45.635598  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:45.646564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:45.646617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:45.673775  124886 cri.go:89] found id: ""
	I1008 14:53:45.673791  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.673797  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:45.673802  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:45.673845  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:45.700610  124886 cri.go:89] found id: ""
	I1008 14:53:45.700627  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.700633  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:45.700638  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:45.700694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:45.726636  124886 cri.go:89] found id: ""
	I1008 14:53:45.726653  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.726662  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:45.726669  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:45.726723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:45.753352  124886 cri.go:89] found id: ""
	I1008 14:53:45.753367  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.753374  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:45.753379  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:45.753434  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:45.780250  124886 cri.go:89] found id: ""
	I1008 14:53:45.780266  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.780272  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:45.780277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:45.780326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:45.805847  124886 cri.go:89] found id: ""
	I1008 14:53:45.805863  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.805870  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:45.805875  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:45.805940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:45.832274  124886 cri.go:89] found id: ""
	I1008 14:53:45.832290  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.832297  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:45.832304  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:45.832315  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:45.901895  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:45.901925  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:45.916420  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:45.916438  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:45.972937  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:45.972948  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:45.972958  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:46.034817  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:46.034841  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.564993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:48.576052  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:48.576102  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:48.602007  124886 cri.go:89] found id: ""
	I1008 14:53:48.602024  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.602031  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:48.602035  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:48.602080  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:48.628143  124886 cri.go:89] found id: ""
	I1008 14:53:48.628160  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.628168  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:48.628173  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:48.628218  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:48.655880  124886 cri.go:89] found id: ""
	I1008 14:53:48.655898  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.655907  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:48.655913  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:48.655958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:48.683255  124886 cri.go:89] found id: ""
	I1008 14:53:48.683270  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.683278  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:48.683284  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:48.683337  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:48.709473  124886 cri.go:89] found id: ""
	I1008 14:53:48.709492  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.709501  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:48.709508  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:48.709567  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:48.736246  124886 cri.go:89] found id: ""
	I1008 14:53:48.736268  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.736274  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:48.736279  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:48.736327  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:48.763463  124886 cri.go:89] found id: ""
	I1008 14:53:48.763483  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.763493  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:48.763503  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:48.763518  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.792359  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:48.792378  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:48.859056  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:48.859077  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:48.873385  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:48.873405  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:48.931065  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:48.931075  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:48.931087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:51.494941  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:51.505819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:51.505869  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:51.533622  124886 cri.go:89] found id: ""
	I1008 14:53:51.533643  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.533652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:51.533659  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:51.533707  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:51.560499  124886 cri.go:89] found id: ""
	I1008 14:53:51.560519  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.560528  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:51.560536  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:51.560584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:51.587541  124886 cri.go:89] found id: ""
	I1008 14:53:51.587556  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.587564  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:51.587569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:51.587616  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:51.614266  124886 cri.go:89] found id: ""
	I1008 14:53:51.614284  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.614291  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:51.614296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:51.614343  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:51.639614  124886 cri.go:89] found id: ""
	I1008 14:53:51.639632  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.639641  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:51.639649  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:51.639708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:51.667306  124886 cri.go:89] found id: ""
	I1008 14:53:51.667322  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.667328  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:51.667333  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:51.667375  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:51.692160  124886 cri.go:89] found id: ""
	I1008 14:53:51.692175  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.692182  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:51.692191  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:51.692204  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:51.720341  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:51.720358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:51.785600  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:51.785622  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:51.800298  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:51.800317  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:51.857283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:51.857293  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:51.857304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:54.424673  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:54.435975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:54.436023  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:54.462429  124886 cri.go:89] found id: ""
	I1008 14:53:54.462462  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.462472  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:54.462479  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:54.462528  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:54.489261  124886 cri.go:89] found id: ""
	I1008 14:53:54.489276  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.489284  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:54.489289  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:54.489344  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:54.514962  124886 cri.go:89] found id: ""
	I1008 14:53:54.514980  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.514990  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:54.514996  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:54.515040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:54.541414  124886 cri.go:89] found id: ""
	I1008 14:53:54.541428  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.541435  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:54.541439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:54.541501  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:54.567913  124886 cri.go:89] found id: ""
	I1008 14:53:54.567931  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.567940  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:54.567945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:54.568008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:54.594492  124886 cri.go:89] found id: ""
	I1008 14:53:54.594511  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.594522  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:54.594528  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:54.594583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:54.621305  124886 cri.go:89] found id: ""
	I1008 14:53:54.621321  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.621330  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:54.621338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:54.621348  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:54.648627  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:54.648645  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:54.717360  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:54.717382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:54.731905  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:54.731923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:54.788630  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:54.788640  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:54.788650  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.353718  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:57.365518  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:57.365570  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:57.391621  124886 cri.go:89] found id: ""
	I1008 14:53:57.391638  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.391646  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:57.391650  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:57.391704  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:57.419557  124886 cri.go:89] found id: ""
	I1008 14:53:57.419574  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.419582  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:57.419587  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:57.419643  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:57.447029  124886 cri.go:89] found id: ""
	I1008 14:53:57.447047  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.447059  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:57.447077  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:57.447126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:57.473391  124886 cri.go:89] found id: ""
	I1008 14:53:57.473410  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.473418  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:57.473423  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:57.473494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:57.499437  124886 cri.go:89] found id: ""
	I1008 14:53:57.499472  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.499481  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:57.499486  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:57.499542  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:57.525753  124886 cri.go:89] found id: ""
	I1008 14:53:57.525770  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.525776  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:57.525782  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:57.525827  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:57.555506  124886 cri.go:89] found id: ""
	I1008 14:53:57.555523  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.555529  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:57.555539  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:57.555553  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:57.623045  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:57.623068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:57.637620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:57.637638  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:57.695326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:57.695339  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:57.695356  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.755685  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:57.755710  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:00.285648  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:00.296554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:00.296603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:00.322379  124886 cri.go:89] found id: ""
	I1008 14:54:00.322396  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.322405  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:00.322409  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:00.322474  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:00.349397  124886 cri.go:89] found id: ""
	I1008 14:54:00.349414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.349423  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:00.349429  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:00.349507  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:00.375588  124886 cri.go:89] found id: ""
	I1008 14:54:00.375602  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.375608  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:00.375613  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:00.375670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:00.401398  124886 cri.go:89] found id: ""
	I1008 14:54:00.401414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.401420  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:00.401426  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:00.401494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:00.427652  124886 cri.go:89] found id: ""
	I1008 14:54:00.427668  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.427675  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:00.427680  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:00.427736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:00.451896  124886 cri.go:89] found id: ""
	I1008 14:54:00.451911  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.451918  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:00.451923  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:00.451967  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:00.478107  124886 cri.go:89] found id: ""
	I1008 14:54:00.478122  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.478128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:00.478135  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:00.478145  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:00.547950  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:00.547974  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:00.561968  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:00.561986  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:00.618117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:00.618131  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:00.618141  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:00.683464  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:00.683490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
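The cycle above repeats every few seconds while minikube waits for the control plane to come back: it lists CRI-O containers for each component, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same checks can be reproduced by hand from a shell on the node; a minimal sketch, assuming `minikube ssh` access to the affected profile (commands taken from the log lines above):

    # ask CRI-O whether any kube-apiserver container exists (running or exited)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # kubelet and CRI-O service logs, same window minikube collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # node view via the bundled kubectl; fails here because the apiserver on :8441 is down
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig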
	I1008 14:54:03.211808  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:03.222618  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:03.222667  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:03.248716  124886 cri.go:89] found id: ""
	I1008 14:54:03.248732  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.248738  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:03.248742  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:03.248784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:03.275183  124886 cri.go:89] found id: ""
	I1008 14:54:03.275202  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.275209  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:03.275214  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:03.275262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:03.301882  124886 cri.go:89] found id: ""
	I1008 14:54:03.301909  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.301915  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:03.301920  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:03.301966  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:03.328783  124886 cri.go:89] found id: ""
	I1008 14:54:03.328799  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.328811  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:03.328817  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:03.328864  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:03.355235  124886 cri.go:89] found id: ""
	I1008 14:54:03.355251  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.355259  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:03.355268  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:03.355313  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:03.382286  124886 cri.go:89] found id: ""
	I1008 14:54:03.382305  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.382313  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:03.382318  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:03.382371  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:03.408682  124886 cri.go:89] found id: ""
	I1008 14:54:03.408700  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.408708  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:03.408718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:03.408732  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.438177  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:03.438196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:03.507859  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:03.507881  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:03.523723  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:03.523747  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:03.580407  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:03.580418  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:03.580430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.142863  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:06.153852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:06.153912  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:06.180234  124886 cri.go:89] found id: ""
	I1008 14:54:06.180253  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.180264  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:06.180271  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:06.180320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:06.207080  124886 cri.go:89] found id: ""
	I1008 14:54:06.207094  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.207101  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:06.207106  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:06.207152  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:06.232369  124886 cri.go:89] found id: ""
	I1008 14:54:06.232384  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.232390  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:06.232394  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:06.232438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:06.257360  124886 cri.go:89] found id: ""
	I1008 14:54:06.257376  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.257383  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:06.257388  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:06.257433  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:06.284487  124886 cri.go:89] found id: ""
	I1008 14:54:06.284507  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.284516  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:06.284523  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:06.284584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:06.310846  124886 cri.go:89] found id: ""
	I1008 14:54:06.310863  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.310874  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:06.310882  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:06.310935  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:06.337095  124886 cri.go:89] found id: ""
	I1008 14:54:06.337114  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.337121  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:06.337130  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:06.337142  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:06.406561  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:06.406591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:06.421066  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:06.421088  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:06.477926  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:06.477943  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:06.477957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.538516  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:06.538537  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:09.071758  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:09.082621  124886 kubeadm.go:601] duration metric: took 4m3.01446136s to restartPrimaryControlPlane
	W1008 14:54:09.082718  124886 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 14:54:09.082774  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:54:09.534098  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:54:09.546894  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:54:09.555065  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:54:09.555116  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:54:09.563122  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:54:09.563134  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:54:09.563181  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:54:09.571418  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:54:09.571492  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:54:09.579061  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:54:09.587199  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:54:09.587244  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:54:09.594420  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.602223  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:54:09.602263  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.609598  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:54:09.616978  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:54:09.617035  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:54:09.624225  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:54:09.679083  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:54:09.736432  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:58:12.118648  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:58:12.118737  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:58:12.121564  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:58:12.121611  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:58:12.121691  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:58:12.121739  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:58:12.121768  124886 kubeadm.go:318] OS: Linux
	I1008 14:58:12.121805  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:58:12.121846  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:58:12.121885  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:58:12.121936  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:58:12.121975  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:58:12.122056  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:58:12.122130  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:58:12.122194  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:58:12.122280  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:58:12.122381  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:58:12.122523  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:58:12.122608  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:58:12.124721  124886 out.go:252]   - Generating certificates and keys ...
	I1008 14:58:12.124815  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:58:12.124880  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:58:12.124964  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:58:12.125031  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:58:12.125148  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:58:12.125193  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:58:12.125282  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:58:12.125362  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:58:12.125490  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:58:12.125594  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:58:12.125626  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:58:12.125673  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:58:12.125714  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:58:12.125760  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:58:12.125802  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:58:12.125857  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:58:12.125902  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:58:12.125971  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:58:12.126032  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:58:12.128152  124886 out.go:252]   - Booting up control plane ...
	I1008 14:58:12.128237  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:58:12.128300  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:58:12.128371  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:58:12.128508  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:58:12.128583  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:58:12.128689  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:58:12.128762  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:58:12.128794  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:58:12.128904  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:58:12.128993  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:58:12.129038  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0016053s
	I1008 14:58:12.129115  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:58:12.129187  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:58:12.129304  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:58:12.129408  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:58:12.129490  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	I1008 14:58:12.129546  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	I1008 14:58:12.129607  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	I1008 14:58:12.129609  124886 kubeadm.go:318] 
	I1008 14:58:12.129696  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:58:12.129765  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:58:12.129857  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:58:12.129935  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:58:12.129999  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:58:12.130073  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:58:12.130125  124886 kubeadm.go:318] 
	W1008 14:58:12.130230  124886 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
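The kubeadm output above points at inspecting the container runtime directly. A minimal triage sketch for this kind of failure, assuming a shell on the control-plane node; CONTAINERID is a placeholder for an ID taken from the listing, and the health endpoints are the ones kubeadm was polling:

    # list control-plane containers known to CRI-O (from the kubeadm hint above)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of a failing container found in the listing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # probe the health endpoints kubeadm timed out on (-k because the certs are self-signed)
    curl -k https://127.0.0.1:10259/livez
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://192.168.49.2:8441/livez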
	
	I1008 14:58:12.130328  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:58:12.582965  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:58:12.596265  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:58:12.596310  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:58:12.604829  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:58:12.604840  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:58:12.604880  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:58:12.613146  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:58:12.613253  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:58:12.621163  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:58:12.629390  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:58:12.629433  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:58:12.637274  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.645831  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:58:12.645886  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.653972  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:58:12.662348  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:58:12.662392  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:58:12.670230  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:58:12.730328  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:58:12.789898  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:02:14.463875  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:02:14.464082  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:02:14.466966  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:02:14.467026  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:02:14.467112  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:02:14.467156  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:02:14.467184  124886 kubeadm.go:318] OS: Linux
	I1008 15:02:14.467232  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:02:14.467270  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:02:14.467309  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:02:14.467348  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:02:14.467386  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:02:14.467424  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:02:14.467494  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:02:14.467536  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:02:14.467596  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:02:14.467693  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:02:14.467779  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:02:14.467827  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:02:14.470599  124886 out.go:252]   - Generating certificates and keys ...
	I1008 15:02:14.470674  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:02:14.470757  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:02:14.470867  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:02:14.470954  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:02:14.471017  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:02:14.471091  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:02:14.471148  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:02:14.471198  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:02:14.471289  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:02:14.471353  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:02:14.471382  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:02:14.471424  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:02:14.471487  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:02:14.471529  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:02:14.471569  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:02:14.471615  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:02:14.471657  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:02:14.471734  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:02:14.471802  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:02:14.473075  124886 out.go:252]   - Booting up control plane ...
	I1008 15:02:14.473133  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:02:14.473209  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:02:14.473257  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:02:14.473356  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:02:14.473436  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:02:14.473538  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:02:14.473606  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:02:14.473637  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:02:14.473747  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:02:14.473833  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:02:14.473877  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.93866ms
	I1008 15:02:14.473950  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:02:14.474013  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 15:02:14.474094  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:02:14.474159  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:02:14.474228  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	I1008 15:02:14.474292  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	I1008 15:02:14.474371  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	I1008 15:02:14.474380  124886 kubeadm.go:318] 
	I1008 15:02:14.474476  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:02:14.474542  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:02:14.474617  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:02:14.474713  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:02:14.474773  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:02:14.474854  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:02:14.474900  124886 kubeadm.go:318] 
	I1008 15:02:14.474937  124886 kubeadm.go:402] duration metric: took 12m8.444330692s to StartCluster
	I1008 15:02:14.474986  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:02:14.475048  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:02:14.503050  124886 cri.go:89] found id: ""
	I1008 15:02:14.503067  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.503076  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:02:14.503082  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:02:14.503136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:02:14.530120  124886 cri.go:89] found id: ""
	I1008 15:02:14.530138  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.530145  124886 logs.go:284] No container was found matching "etcd"
	I1008 15:02:14.530149  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:02:14.530200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:02:14.555892  124886 cri.go:89] found id: ""
	I1008 15:02:14.555909  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.555916  124886 logs.go:284] No container was found matching "coredns"
	I1008 15:02:14.555921  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:02:14.555972  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:02:14.583336  124886 cri.go:89] found id: ""
	I1008 15:02:14.583351  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.583358  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:02:14.583363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:02:14.583409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:02:14.611139  124886 cri.go:89] found id: ""
	I1008 15:02:14.611160  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.611169  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:02:14.611175  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:02:14.611227  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:02:14.639405  124886 cri.go:89] found id: ""
	I1008 15:02:14.639422  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.639429  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:02:14.639434  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:02:14.639496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:02:14.666049  124886 cri.go:89] found id: ""
	I1008 15:02:14.666066  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.666073  124886 logs.go:284] No container was found matching "kindnet"
	I1008 15:02:14.666082  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:02:14.666093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:02:14.729847  124886 logs.go:123] Gathering logs for container status ...
	I1008 15:02:14.729877  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 15:02:14.760743  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 15:02:14.760761  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:02:14.827532  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 15:02:14.827555  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:02:14.842256  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:02:14.842273  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:02:14.900360  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1008 15:02:14.900380  124886 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:02:14.900418  124886 out.go:285] * 
	W1008 15:02:14.900560  124886 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.900582  124886 out.go:285] * 
	W1008 15:02:14.902936  124886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:02:14.906609  124886 out.go:203] 
	W1008 15:02:14.908139  124886 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.908172  124886 out.go:285] * 
	I1008 15:02:14.910356  124886 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.236147607Z" level=info msg="createCtr: removing container 8f90e981d591b1813723dfa77b79e967f03eead8d5e3a0d2b53230766b677389" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.236182628Z" level=info msg="createCtr: deleting container 8f90e981d591b1813723dfa77b79e967f03eead8d5e3a0d2b53230766b677389 from storage" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.238322647Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c9f63674abedb97e40dbf72720752d59_0" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.21213297Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e58a632c-ac54-43a6-a140-845f4ef163fe name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.214269396Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5ad01698-37e9-4323-80f8-3474caec0a68 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.215179195Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-367186/kube-scheduler" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.215432207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.218783034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.219253823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.234458319Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.235918816Z" level=info msg="createCtr: deleting container ID 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2 from idIndex" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.235970167Z" level=info msg="createCtr: removing container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.236014435Z" level=info msg="createCtr: deleting container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2 from storage" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.238146031Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.213078537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=54863201-7b39-4ed4-ab14-0d41c1a7c865 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.21401263Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=aea1d193-b8b9-4b9f-b6bb-340acce60e77 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.214965671Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.215222603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218562955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218978786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.240788352Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242470926Z" level=info msg="createCtr: deleting container ID 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from idIndex" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242521147Z" level=info msg="createCtr: removing container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242570796Z" level=info msg="createCtr: deleting container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from storage" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.244732312Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:16.072531   15736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:16.073197   15736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:16.074829   15736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:16.075318   15736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:16.076868   15736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:16 up  2:44,  0 user,  load average: 0.00, 0.03, 0.22
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:11 functional-367186 kubelet[14967]: E1008 15:02:11.238675   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:11 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:11 functional-367186 kubelet[14967]:  > podSandboxID="103af37cbf4c9221b295ec70e9d3c9c67c8cbc7d0f6d428cb18ada4b23a2bd33"
	Oct 08 15:02:11 functional-367186 kubelet[14967]: E1008 15:02:11.238796   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:11 functional-367186 kubelet[14967]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c9f63674abedb97e40dbf72720752d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:11 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:11 functional-367186 kubelet[14967]: E1008 15:02:11.238833   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c9f63674abedb97e40dbf72720752d59"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.211693   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238496   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:12 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:12 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238592   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:12 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:12 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238621   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.212513   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245058   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > podSandboxID="49d755d590c1e6c75fffb26df4018ef3af1ece9b6aef63dbe754f59f467146f3"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245169   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245209   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:14 functional-367186 kubelet[14967]: E1008 15:02:14.233845   14967 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 15:02:16 functional-367186 kubelet[14967]: E1008 15:02:16.045402   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (304.928656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (734.35s)
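
The repeated failure in the CRI-O and kubelet logs above is "Container creation error: cannot open sd-bus: No such file or directory" for every control-plane container, which is why kube-apiserver, kube-controller-manager and kube-scheduler never become healthy on 192.168.49.2:8441. A minimal follow-up sketch, assuming shell access on the functional-367186 node and that CRI-O's configuration lives under /etc/crio/ (the crictl socket path is the one kubeadm prints above; CONTAINERID is a placeholder):

    # list the control-plane containers CRI-O failed to create, then read one container's logs
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # "cannot open sd-bus" typically means the runtime is trying to reach systemd for cgroup
    # management; check which cgroup manager CRI-O is configured with and what crio itself logged
    grep -r cgroup_manager /etc/crio/ 2>/dev/null
    sudo journalctl -u crio --no-pager -n 50

If cgroup_manager turns out to be "systemd" while no systemd/D-Bus session is reachable from the runtime, that mismatch would be the first thing to chase; this is a diagnostic sketch, not part of the recorded test run.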

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-367186 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-367186 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (54.103721ms)

                                                
                                                
** stderr ** 
	E1008 15:02:16.841095  138141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:16.841506  138141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:16.842942  138141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:16.843217  138141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:16.844710  138141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-367186 get po -l tier=control-plane -n kube-system -o=json": exit status 1
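Every kubectl call here fails with "connection refused" against 192.168.49.2:8441, so the component query says nothing about control-plane health; the API server simply is not listening. A quick way to confirm that from the host before reading the post-mortem below, assuming the docker CLI and curl are available on the Jenkins agent (port 32781 is the 8441/tcp host mapping shown in the docker inspect output further down):

    # is a kube-apiserver container present (even exited) inside the node container?
    docker exec functional-367186 crictl ps -a | grep kube-apiserver
    # probe the published apiserver port on the host; a refused/reset connection matches the kubectl errors above
    curl -k https://127.0.0.1:32781/livez
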
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (296.024993ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ unpause │ nospam-526605 --log_dir /tmp/nospam-526605 unpause                                                            │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ stop    │ nospam-526605 --log_dir /tmp/nospam-526605 stop                                                               │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ delete  │ -p nospam-526605                                                                                              │ nospam-526605     │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │ 08 Oct 25 14:35 UTC │
	│ start   │ -p functional-367186 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:35 UTC │                     │
	│ start   │ -p functional-367186 --alsologtostderr -v=8                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:43 UTC │                     │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.1                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:3.3                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add registry.k8s.io/pause:latest                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache add minikube-local-cache-test:functional-367186                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ functional-367186 cache delete minikube-local-cache-test:functional-367186                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl images                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ cache   │ functional-367186 cache reload                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ kubectl │ functional-367186 kubectl -- --context functional-367186 get pods                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ start   │ -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:50:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:50:02.487614  124886 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:02.487885  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.487890  124886 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:02.487894  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.488148  124886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:50:02.488703  124886 out.go:368] Setting JSON to false
	I1008 14:50:02.489732  124886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9153,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:50:02.489824  124886 start.go:141] virtualization: kvm guest
	I1008 14:50:02.491855  124886 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:50:02.493271  124886 notify.go:220] Checking for updates...
	I1008 14:50:02.493279  124886 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:50:02.494598  124886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:50:02.495836  124886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:50:02.497242  124886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:50:02.498624  124886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:50:02.499973  124886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:50:02.501897  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:02.502018  124886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:50:02.525193  124886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:50:02.525315  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.584022  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.573926988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.584110  124886 docker.go:318] overlay module found
	I1008 14:50:02.585968  124886 out.go:179] * Using the docker driver based on existing profile
	I1008 14:50:02.587279  124886 start.go:305] selected driver: docker
	I1008 14:50:02.587288  124886 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.587409  124886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:50:02.587529  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.641632  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.631975419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.642294  124886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:50:02.642317  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:02.642374  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:02.642409  124886 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.644427  124886 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:50:02.645877  124886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:50:02.647092  124886 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:50:02.648224  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:02.648254  124886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:50:02.648262  124886 cache.go:58] Caching tarball of preloaded images
	I1008 14:50:02.648344  124886 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:50:02.648340  124886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:50:02.648350  124886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:50:02.648438  124886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:50:02.667989  124886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:50:02.668000  124886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:50:02.668014  124886 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:50:02.668041  124886 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:50:02.668096  124886 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "functional-367186"
	I1008 14:50:02.668109  124886 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:50:02.668113  124886 fix.go:54] fixHost starting: 
	I1008 14:50:02.668337  124886 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:50:02.684543  124886 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:50:02.684562  124886 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:50:02.686414  124886 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:50:02.686441  124886 machine.go:93] provisionDockerMachine start ...
	I1008 14:50:02.686552  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.704251  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.704482  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.704488  124886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:50:02.850612  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:02.850631  124886 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:50:02.850683  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.868208  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.868417  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.868424  124886 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:50:03.024186  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:03.024255  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.041071  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.041277  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.041288  124886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:50:03.186253  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:50:03.186270  124886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:50:03.186287  124886 ubuntu.go:190] setting up certificates
	I1008 14:50:03.186296  124886 provision.go:84] configureAuth start
	I1008 14:50:03.186366  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:03.203498  124886 provision.go:143] copyHostCerts
	I1008 14:50:03.203554  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:50:03.203567  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:50:03.203633  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:50:03.203728  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:50:03.203738  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:50:03.203764  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:50:03.203811  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:50:03.203815  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:50:03.203835  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:50:03.203891  124886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:50:03.342698  124886 provision.go:177] copyRemoteCerts
	I1008 14:50:03.342747  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:50:03.342789  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.359931  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
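The ssh client opened here goes through the container's published 22/tcp port on 127.0.0.1 rather than the container IP. A minimal manual equivalent, a sketch reusing the inspect template, key path and user shown in this run:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-367186)
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa \
        -p "$PORT" docker@127.0.0.1 hostname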
	I1008 14:50:03.462754  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:50:03.480100  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:50:03.497218  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:50:03.514367  124886 provision.go:87] duration metric: took 328.059175ms to configureAuth
	I1008 14:50:03.514387  124886 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:50:03.514597  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:03.514714  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.531920  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.532136  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.532149  124886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:50:03.804333  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:50:03.804348  124886 machine.go:96] duration metric: took 1.117888769s to provisionDockerMachine
	I1008 14:50:03.804358  124886 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:50:03.804366  124886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:50:03.804425  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:50:03.804490  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.822222  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.925021  124886 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:50:03.928570  124886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:50:03.928586  124886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:50:03.928595  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:50:03.928648  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:50:03.928714  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:50:03.928776  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:50:03.928851  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:50:03.936383  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:03.953682  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:50:03.970665  124886 start.go:296] duration metric: took 166.291312ms for postStartSetup
	I1008 14:50:03.970729  124886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:03.970760  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.987625  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.086669  124886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:50:04.091298  124886 fix.go:56] duration metric: took 1.423178254s for fixHost
	I1008 14:50:04.091311  124886 start.go:83] releasing machines lock for "functional-367186", held for 1.423209484s
	I1008 14:50:04.091360  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:04.107787  124886 ssh_runner.go:195] Run: cat /version.json
	I1008 14:50:04.107823  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.107871  124886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:50:04.107944  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.125505  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.126027  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.277012  124886 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:04.283607  124886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:50:04.317281  124886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:50:04.322127  124886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:50:04.322186  124886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:50:04.329933  124886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:50:04.329948  124886 start.go:495] detecting cgroup driver to use...
	I1008 14:50:04.329985  124886 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:50:04.330037  124886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:50:04.344088  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:50:04.355897  124886 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:50:04.355934  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:50:04.370666  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:50:04.383061  124886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:50:04.469185  124886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:50:04.555865  124886 docker.go:234] disabling docker service ...
	I1008 14:50:04.555933  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:50:04.571649  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:50:04.585004  124886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:50:04.673830  124886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:50:04.762936  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:50:04.775689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:50:04.790127  124886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:50:04.790172  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.799414  124886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:50:04.799484  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.808366  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.816703  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.825175  124886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:50:04.833160  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.842121  124886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.850355  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
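This run of sed commands rewrites the kic-base CRI-O drop-in in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Assuming the stock /etc/crio/crio.conf.d/02-crio.conf layout from the base image, the result can be spot-checked before the crio restart a few lines below:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",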
	I1008 14:50:04.859028  124886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:50:04.866049  124886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:50:04.873109  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:04.955543  124886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:50:05.069798  124886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:50:05.069856  124886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:50:05.074109  124886 start.go:563] Will wait 60s for crictl version
	I1008 14:50:05.074171  124886 ssh_runner.go:195] Run: which crictl
	I1008 14:50:05.077741  124886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:50:05.103519  124886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:50:05.103581  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.131061  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.160549  124886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:50:05.161770  124886 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:50:05.178428  124886 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:50:05.184282  124886 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 14:50:05.185372  124886 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:50:05.185532  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:05.185581  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.219145  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.219157  124886 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:50:05.219203  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.244747  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.244760  124886 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:50:05.244766  124886 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:50:05.244868  124886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:50:05.244932  124886 ssh_runner.go:195] Run: crio config
	I1008 14:50:05.290552  124886 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 14:50:05.290627  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:05.290634  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:05.290643  124886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:50:05.290661  124886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:50:05.290774  124886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:50:05.290829  124886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:50:05.299112  124886 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:50:05.299181  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:50:05.307519  124886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:50:05.319796  124886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:50:05.331988  124886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
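Everything in the kubeadm config staged above as /var/tmp/minikube/kubeadm.yaml.new is minikube's generated defaults except the single ExtraOptions override, enable-admission-plugins=NamespaceAutoProvision. That admission plugin creates a namespace the first time an object references it instead of rejecting the request, so once the apiserver comes back with this flag, a sketch like the following (assuming kubectl is pointed at this cluster) succeeds without creating the namespace first:

    kubectl run probe --image=registry.k8s.io/pause:3.10.1 -n never-created-before
    kubectl get ns never-created-before   # auto-provisioned by the admission plugin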
	I1008 14:50:05.344225  124886 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:50:05.347910  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:05.434760  124886 ssh_runner.go:195] Run: sudo systemctl start kubelet
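The unit written to /lib/systemd/system/kubelet.service and the drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the two scp targets a few lines up) are now loaded and started. A quick sanity check on the node, assuming systemd is PID 1 inside the kic container as usual:

    sudo systemctl cat kubelet          # unit plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet    # should settle to "active" once the kubelet-start phase below runs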
	I1008 14:50:05.447481  124886 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:50:05.447496  124886 certs.go:195] generating shared ca certs ...
	I1008 14:50:05.447517  124886 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:50:05.447665  124886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:50:05.447699  124886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:50:05.447705  124886 certs.go:257] generating profile certs ...
	I1008 14:50:05.447783  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:50:05.447822  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:50:05.447852  124886 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:50:05.447956  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:50:05.447979  124886 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:50:05.447984  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:50:05.448004  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:50:05.448022  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:50:05.448039  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:50:05.448072  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:05.448723  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:50:05.466280  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:50:05.482753  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:50:05.499451  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:50:05.516010  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:50:05.532903  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:50:05.549460  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:50:05.566552  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:50:05.584248  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:50:05.601250  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:50:05.618600  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:50:05.636280  124886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:50:05.648959  124886 ssh_runner.go:195] Run: openssl version
	I1008 14:50:05.655372  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:50:05.664552  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668508  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668554  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.702319  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:50:05.710597  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:50:05.719238  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722899  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722944  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.756814  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:50:05.765232  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:50:05.773915  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777582  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777627  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.811974  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
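The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash lookups: a CA in /etc/ssl/certs is located by the value of openssl x509 -hash, so every PEM copied in needs a matching <hash>.0 symlink. Reproducing one of them by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$H"                              # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"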
	I1008 14:50:05.820369  124886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:50:05.824309  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:50:05.858210  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:50:05.892122  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:50:05.926997  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:50:05.961508  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:50:05.996031  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
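These six openssl calls are the certificate expiry gate: x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what would prompt minikube to regenerate it. The same check by hand for one of the certs:

    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "valid for at least 24h" \
        || echo "expires within 24h"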
	I1008 14:50:06.030615  124886 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:06.030703  124886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:50:06.030782  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.059591  124886 cri.go:89] found id: ""
	I1008 14:50:06.059641  124886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:50:06.068127  124886 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:50:06.068151  124886 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:50:06.068205  124886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:50:06.076226  124886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.076725  124886 kubeconfig.go:125] found "functional-367186" server: "https://192.168.49.2:8441"
	I1008 14:50:06.077896  124886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:50:06.086029  124886 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 14:35:34.873718023 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 14:50:05.341579042 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1008 14:50:06.086044  124886 kubeadm.go:1160] stopping kube-system containers ...
	I1008 14:50:06.086056  124886 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 14:50:06.086094  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.113178  124886 cri.go:89] found id: ""
	I1008 14:50:06.113245  124886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 14:50:06.155234  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:50:06.163592  124886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  8 14:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  8 14:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Oct  8 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  8 14:39 /etc/kubernetes/scheduler.conf
	
	I1008 14:50:06.163642  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:50:06.171483  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:50:06.179293  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.179397  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:50:06.186779  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.194154  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.194203  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.201651  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:50:06.209487  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.209530  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:50:06.217108  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:50:06.224828  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:06.265674  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.277477  124886 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011762147s)
	I1008 14:50:07.277533  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.443820  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.494457  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.547380  124886 api_server.go:52] waiting for apiserver process to appear ...
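The long run of identical pgrep calls that follows is the apiserver wait loop: roughly every 500ms (visible in the timestamps) minikube re-checks whether a kube-apiserver process whose command line mentions minikube has come up. The equivalent one-off probe on the node:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
        && echo "kube-apiserver is up" \
        || echo "not running yet"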
	I1008 14:50:07.547460  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.047610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.547636  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.047603  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.548254  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.047862  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.548513  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.048225  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.548074  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.048566  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.548179  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.047805  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.548258  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.048373  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.047544  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.548496  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.048492  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.548115  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.548277  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.047671  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.048049  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.547809  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.047855  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.547915  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.048015  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.547746  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.048353  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.548289  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.048071  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.547643  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.047912  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.548519  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.047801  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.547748  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.048322  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.548153  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.047657  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.547721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.047652  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.047871  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.548380  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.047959  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.548581  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.047957  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.547650  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.048117  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.547561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.048296  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.547881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.047870  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.548272  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.548487  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.047562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.547999  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.048398  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.547939  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.048434  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.547918  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.048433  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.548054  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.048329  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.548100  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.047697  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.548386  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.047561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.548546  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.048286  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.547793  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.048077  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.547717  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.048220  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.548251  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.047634  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.548172  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.048591  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.548428  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.048515  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.547901  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.048572  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.548237  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.047859  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.548570  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.047742  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.548274  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.047802  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.548510  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.047998  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.547560  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.047723  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.547955  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.048562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.547549  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.047984  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.547945  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.048426  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.547582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.048058  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.548196  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.048582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.548046  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.047563  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.047699  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.547610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.048374  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.548211  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.048533  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
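
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above are minikube polling, at roughly 500 ms intervals for about one minute (14:50:07 to 14:51:07), for the kube-apiserver process that the preceding `kubeadm init phase control-plane` step was expected to start; in this run the process never appears. A minimal Go sketch of that poll-until-deadline pattern is shown below. It is illustrative only: it runs the command locally with os/exec, whereas the real check in the log goes through minikube's ssh_runner inside the node container, and the helper name and timeout are assumptions, not the actual minikube source.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `sudo pgrep -xnf <pattern>` until a matching process
	// appears or the deadline passes. pgrep exits non-zero when nothing matches,
	// so err != nil means "not found yet".
	func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
			if err == nil && len(out) > 0 {
				return string(out), nil // PID(s) found
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println("wait failed:", err)
			return
		}
		fmt.Println("apiserver PID:", pid)
	}
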
	I1008 14:51:07.548306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:07.548386  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:07.574942  124886 cri.go:89] found id: ""
	I1008 14:51:07.574974  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.574982  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:07.574988  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:07.575052  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:07.600942  124886 cri.go:89] found id: ""
	I1008 14:51:07.600957  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.600964  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:07.600968  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:07.601020  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:07.627307  124886 cri.go:89] found id: ""
	I1008 14:51:07.627324  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.627331  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:07.627336  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:07.627388  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:07.653908  124886 cri.go:89] found id: ""
	I1008 14:51:07.653925  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.653933  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:07.653938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:07.653988  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:07.681787  124886 cri.go:89] found id: ""
	I1008 14:51:07.681806  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.681814  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:07.681818  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:07.681881  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:07.707870  124886 cri.go:89] found id: ""
	I1008 14:51:07.707886  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.707892  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:07.707898  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:07.707955  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:07.734640  124886 cri.go:89] found id: ""
	I1008 14:51:07.734655  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.734662  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:07.734673  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:07.734682  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:07.804699  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:07.804721  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:07.819273  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:07.819290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:07.875686  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:07.875696  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:07.875709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:07.940091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:07.940122  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
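
Once the wait expires, the log shows minikube switching to diagnostics: it asks CRI-O for each expected control-plane container with `crictl ps -a --quiet --name=<component>` and finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The sketch below mirrors that per-component container query; it is an assumption-laden stand-in (local os/exec instead of the ssh_runner used in the log, hypothetical helper names), not minikube's own code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Control-plane components checked in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	// listComponentContainers asks the CRI runtime for container IDs matching
	// each component name, mirroring the `crictl ps -a --quiet --name=...` calls
	// above. An empty result corresponds to the `found id: ""` / `0 containers`
	// lines in the log.
	func listComponentContainers() map[string][]string {
		found := make(map[string][]string)
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				continue // nothing created or running for this component
			}
			found[name] = ids
		}
		return found
	}

	func main() {
		for name, ids := range listComponentContainers() {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
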
	I1008 14:51:10.470645  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:10.481694  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:10.481739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:10.506817  124886 cri.go:89] found id: ""
	I1008 14:51:10.506832  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.506839  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:10.506843  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:10.506898  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:10.531484  124886 cri.go:89] found id: ""
	I1008 14:51:10.531499  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.531506  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:10.531511  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:10.531558  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:10.557249  124886 cri.go:89] found id: ""
	I1008 14:51:10.557268  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.557277  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:10.557282  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:10.557333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:10.582779  124886 cri.go:89] found id: ""
	I1008 14:51:10.582797  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.582833  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:10.582838  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:10.582908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:10.608584  124886 cri.go:89] found id: ""
	I1008 14:51:10.608599  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.608606  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:10.608610  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:10.608653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:10.634540  124886 cri.go:89] found id: ""
	I1008 14:51:10.634557  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.634567  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:10.634573  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:10.634635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:10.659510  124886 cri.go:89] found id: ""
	I1008 14:51:10.659526  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.659532  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:10.659541  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:10.659552  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:10.727322  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:10.727344  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:10.741862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:10.741882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:10.798339  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:10.798350  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:10.798362  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:10.862340  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:10.862363  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
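
Every `kubectl describe nodes` attempt in these cycles fails with `connection refused` on localhost:8441 (the apiserver port used by this profile), which is consistent with no kube-apiserver container ever being created. A bare TCP dial is enough to confirm that nothing is listening on the port; the snippet below is a minimal sketch with a hypothetical helper, using only the address taken from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer attempts a plain TCP connection to the apiserver address
	// that the failing kubectl calls use. "connection refused", as seen
	// repeatedly in the log, means nothing is listening on the port yet.
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer("localhost:8441"); err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		fmt.Println("apiserver port is accepting connections")
	}
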
	I1008 14:51:13.392975  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:13.404098  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:13.404165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:13.430215  124886 cri.go:89] found id: ""
	I1008 14:51:13.430231  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.430237  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:13.430242  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:13.430283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:13.455821  124886 cri.go:89] found id: ""
	I1008 14:51:13.455837  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.455844  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:13.455853  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:13.455903  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:13.482279  124886 cri.go:89] found id: ""
	I1008 14:51:13.482296  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.482316  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:13.482321  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:13.482366  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:13.508868  124886 cri.go:89] found id: ""
	I1008 14:51:13.508883  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.508893  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:13.508900  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:13.508957  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:13.534938  124886 cri.go:89] found id: ""
	I1008 14:51:13.534954  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.534960  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:13.534964  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:13.535012  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:13.562594  124886 cri.go:89] found id: ""
	I1008 14:51:13.562611  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.562620  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:13.562626  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:13.562683  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:13.588476  124886 cri.go:89] found id: ""
	I1008 14:51:13.588493  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.588505  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:13.588513  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:13.588522  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.617969  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:13.617996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:13.687989  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:13.688010  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:13.702556  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:13.702577  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:13.758238  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:13.758274  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:13.758288  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.324420  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:16.335355  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:16.335413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:16.361211  124886 cri.go:89] found id: ""
	I1008 14:51:16.361227  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.361233  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:16.361238  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:16.361283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:16.388154  124886 cri.go:89] found id: ""
	I1008 14:51:16.388170  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.388176  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:16.388180  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:16.388234  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:16.414515  124886 cri.go:89] found id: ""
	I1008 14:51:16.414532  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.414539  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:16.414545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:16.414606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:16.441112  124886 cri.go:89] found id: ""
	I1008 14:51:16.441130  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.441137  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:16.441143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:16.441196  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:16.467403  124886 cri.go:89] found id: ""
	I1008 14:51:16.467423  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.467434  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:16.467439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:16.467515  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:16.493912  124886 cri.go:89] found id: ""
	I1008 14:51:16.493994  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.494017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:16.494025  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:16.494086  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:16.520736  124886 cri.go:89] found id: ""
	I1008 14:51:16.520754  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.520761  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:16.520770  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:16.520784  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:16.578205  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:16.578222  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:16.578237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.641639  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:16.641661  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:16.671073  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:16.671090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:16.740879  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:16.740901  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.256721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:19.267621  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:19.267671  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:19.293587  124886 cri.go:89] found id: ""
	I1008 14:51:19.293605  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.293611  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:19.293616  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:19.293661  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:19.318866  124886 cri.go:89] found id: ""
	I1008 14:51:19.318886  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.318898  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:19.318905  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:19.318973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:19.344646  124886 cri.go:89] found id: ""
	I1008 14:51:19.344660  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.344668  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:19.344673  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:19.344730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:19.370979  124886 cri.go:89] found id: ""
	I1008 14:51:19.370994  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.371001  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:19.371006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:19.371049  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:19.398115  124886 cri.go:89] found id: ""
	I1008 14:51:19.398134  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.398144  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:19.398149  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:19.398205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:19.425579  124886 cri.go:89] found id: ""
	I1008 14:51:19.425594  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.425602  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:19.425606  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:19.425664  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:19.451179  124886 cri.go:89] found id: ""
	I1008 14:51:19.451194  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.451201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:19.451209  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:19.451219  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:19.515409  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:19.515430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.530193  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:19.530208  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:19.587513  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:19.587527  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:19.587538  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:19.650244  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:19.650266  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:22.181221  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:22.192437  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:22.192530  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:22.218691  124886 cri.go:89] found id: ""
	I1008 14:51:22.218709  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.218717  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:22.218722  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:22.218784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:22.245011  124886 cri.go:89] found id: ""
	I1008 14:51:22.245028  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.245035  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:22.245040  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:22.245087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:22.271669  124886 cri.go:89] found id: ""
	I1008 14:51:22.271698  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.271706  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:22.271710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:22.271775  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:22.298500  124886 cri.go:89] found id: ""
	I1008 14:51:22.298520  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.298529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:22.298537  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:22.298598  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:22.324858  124886 cri.go:89] found id: ""
	I1008 14:51:22.324873  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.324879  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:22.324883  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:22.324930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:22.351540  124886 cri.go:89] found id: ""
	I1008 14:51:22.351556  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.351563  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:22.351568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:22.351613  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:22.377421  124886 cri.go:89] found id: ""
	I1008 14:51:22.377458  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.377470  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:22.377482  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:22.377497  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:22.450410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:22.450465  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:22.465230  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:22.465257  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:22.521387  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:22.521398  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:22.521409  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:22.586462  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:22.586490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.117667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:25.129264  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:25.129309  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:25.155977  124886 cri.go:89] found id: ""
	I1008 14:51:25.155998  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.156007  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:25.156016  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:25.156090  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:25.183268  124886 cri.go:89] found id: ""
	I1008 14:51:25.183288  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.183297  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:25.183302  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:25.183355  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:25.209728  124886 cri.go:89] found id: ""
	I1008 14:51:25.209745  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.209752  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:25.209763  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:25.209807  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:25.236946  124886 cri.go:89] found id: ""
	I1008 14:51:25.236961  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.236968  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:25.236974  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:25.237017  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:25.263116  124886 cri.go:89] found id: ""
	I1008 14:51:25.263132  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.263138  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:25.263143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:25.263189  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:25.288378  124886 cri.go:89] found id: ""
	I1008 14:51:25.288395  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.288401  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:25.288406  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:25.288460  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:25.315195  124886 cri.go:89] found id: ""
	I1008 14:51:25.315210  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.315217  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:25.315225  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:25.315237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:25.371376  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:25.371387  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:25.371396  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:25.435272  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:25.435294  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.465980  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:25.465996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:25.535450  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:25.535477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.050276  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:28.061620  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:28.061668  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:28.088245  124886 cri.go:89] found id: ""
	I1008 14:51:28.088265  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.088274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:28.088278  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:28.088326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:28.113839  124886 cri.go:89] found id: ""
	I1008 14:51:28.113859  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.113870  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:28.113876  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:28.113940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:28.141395  124886 cri.go:89] found id: ""
	I1008 14:51:28.141414  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.141423  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:28.141429  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:28.141503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:28.168333  124886 cri.go:89] found id: ""
	I1008 14:51:28.168348  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.168354  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:28.168360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:28.168413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:28.192847  124886 cri.go:89] found id: ""
	I1008 14:51:28.192864  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.192870  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:28.192876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:28.192936  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:28.218780  124886 cri.go:89] found id: ""
	I1008 14:51:28.218795  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.218801  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:28.218806  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:28.218875  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:28.244592  124886 cri.go:89] found id: ""
	I1008 14:51:28.244612  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.244622  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:28.244631  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:28.244643  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:28.315714  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:28.315736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.329938  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:28.329954  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:28.387618  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:28.387629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:28.387641  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:28.453202  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:28.453224  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:30.984664  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:30.995891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:30.995939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:31.022304  124886 cri.go:89] found id: ""
	I1008 14:51:31.022328  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.022338  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:31.022344  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:31.022401  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:31.049041  124886 cri.go:89] found id: ""
	I1008 14:51:31.049060  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.049069  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:31.049075  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:31.049123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:31.076924  124886 cri.go:89] found id: ""
	I1008 14:51:31.076940  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.076949  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:31.076953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:31.077003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:31.102922  124886 cri.go:89] found id: ""
	I1008 14:51:31.102942  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.102950  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:31.102955  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:31.103003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:31.131223  124886 cri.go:89] found id: ""
	I1008 14:51:31.131237  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.131244  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:31.131248  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:31.131294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:31.157335  124886 cri.go:89] found id: ""
	I1008 14:51:31.157350  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.157356  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:31.157361  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:31.157403  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:31.183539  124886 cri.go:89] found id: ""
	I1008 14:51:31.183556  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.183563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:31.183571  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:31.183582  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:31.254970  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:31.254991  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:31.269535  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:31.269556  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:31.325660  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:31.325690  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:31.325702  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:31.390180  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:31.390201  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:33.920121  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:33.931525  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:33.931580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:33.956578  124886 cri.go:89] found id: ""
	I1008 14:51:33.956594  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.956601  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:33.956606  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:33.956652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:33.983065  124886 cri.go:89] found id: ""
	I1008 14:51:33.983083  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.983094  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:33.983100  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:33.983176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:34.009180  124886 cri.go:89] found id: ""
	I1008 14:51:34.009198  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.009206  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:34.009211  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:34.009266  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:34.035120  124886 cri.go:89] found id: ""
	I1008 14:51:34.035138  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.035145  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:34.035151  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:34.035207  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:34.060490  124886 cri.go:89] found id: ""
	I1008 14:51:34.060506  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.060512  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:34.060517  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:34.060565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:34.086320  124886 cri.go:89] found id: ""
	I1008 14:51:34.086338  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.086346  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:34.086351  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:34.086394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:34.111862  124886 cri.go:89] found id: ""
	I1008 14:51:34.111883  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.111893  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:34.111902  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:34.111921  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:34.181743  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:34.181765  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:34.196152  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:34.196171  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:34.252034  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:34.252045  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:34.252056  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:34.316760  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:34.316781  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:36.845595  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:36.856603  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:36.856648  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:36.883175  124886 cri.go:89] found id: ""
	I1008 14:51:36.883194  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.883202  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:36.883209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:36.883267  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:36.910081  124886 cri.go:89] found id: ""
	I1008 14:51:36.910096  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.910103  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:36.910107  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:36.910157  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:36.935036  124886 cri.go:89] found id: ""
	I1008 14:51:36.935051  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.935062  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:36.935068  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:36.935122  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:36.961981  124886 cri.go:89] found id: ""
	I1008 14:51:36.961998  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.962009  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:36.962016  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:36.962126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:36.989270  124886 cri.go:89] found id: ""
	I1008 14:51:36.989290  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.989299  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:36.989306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:36.989363  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:37.016135  124886 cri.go:89] found id: ""
	I1008 14:51:37.016153  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.016161  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:37.016165  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:37.016215  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:37.043172  124886 cri.go:89] found id: ""
	I1008 14:51:37.043191  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.043201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:37.043211  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:37.043227  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:37.100326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:37.100338  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:37.100351  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:37.163756  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:37.163777  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:37.193435  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:37.193471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:37.260908  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:37.260933  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:39.777967  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:39.789007  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:39.789059  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:39.815862  124886 cri.go:89] found id: ""
	I1008 14:51:39.815879  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.815886  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:39.815890  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:39.815942  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:39.841950  124886 cri.go:89] found id: ""
	I1008 14:51:39.841966  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.841973  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:39.841979  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:39.842039  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:39.868668  124886 cri.go:89] found id: ""
	I1008 14:51:39.868686  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.868696  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:39.868702  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:39.868755  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:39.895534  124886 cri.go:89] found id: ""
	I1008 14:51:39.895554  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.895564  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:39.895571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:39.895622  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:39.922579  124886 cri.go:89] found id: ""
	I1008 14:51:39.922598  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.922608  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:39.922614  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:39.922660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:39.948340  124886 cri.go:89] found id: ""
	I1008 14:51:39.948356  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.948363  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:39.948367  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:39.948410  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:39.975730  124886 cri.go:89] found id: ""
	I1008 14:51:39.975746  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.975752  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:39.975761  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:39.975771  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:40.004995  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:40.005014  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:40.075523  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:40.075546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:40.090104  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:40.090120  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:40.147226  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:40.147238  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:40.147253  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:42.711983  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:42.723356  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:42.723413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:42.749822  124886 cri.go:89] found id: ""
	I1008 14:51:42.749838  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.749844  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:42.749849  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:42.749917  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:42.776397  124886 cri.go:89] found id: ""
	I1008 14:51:42.776414  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.776421  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:42.776425  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:42.776493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:42.802489  124886 cri.go:89] found id: ""
	I1008 14:51:42.802508  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.802518  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:42.802524  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:42.802572  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:42.829172  124886 cri.go:89] found id: ""
	I1008 14:51:42.829187  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.829193  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:42.829198  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:42.829251  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:42.853534  124886 cri.go:89] found id: ""
	I1008 14:51:42.853552  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.853561  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:42.853568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:42.853635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:42.879567  124886 cri.go:89] found id: ""
	I1008 14:51:42.879583  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.879595  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:42.879601  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:42.879652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:42.904961  124886 cri.go:89] found id: ""
	I1008 14:51:42.904979  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.904986  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:42.904993  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:42.905009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:42.974363  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:42.974384  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:42.989172  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:42.989192  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:43.045247  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:43.045260  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:43.045275  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:43.106406  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:43.106429  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:45.637311  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:45.648040  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:45.648095  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:45.673462  124886 cri.go:89] found id: ""
	I1008 14:51:45.673481  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.673491  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:45.673497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:45.673550  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:45.698163  124886 cri.go:89] found id: ""
	I1008 14:51:45.698181  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.698188  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:45.698193  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:45.698246  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:45.723467  124886 cri.go:89] found id: ""
	I1008 14:51:45.723561  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.723573  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:45.723581  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:45.723641  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:45.748702  124886 cri.go:89] found id: ""
	I1008 14:51:45.748717  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.748726  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:45.748732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:45.748796  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:45.775585  124886 cri.go:89] found id: ""
	I1008 14:51:45.775604  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.775612  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:45.775617  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:45.775670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:45.801010  124886 cri.go:89] found id: ""
	I1008 14:51:45.801025  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.801031  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:45.801036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:45.801084  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:45.827042  124886 cri.go:89] found id: ""
	I1008 14:51:45.827059  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.827067  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:45.827075  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:45.827086  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:45.895458  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:45.895480  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:45.910085  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:45.910109  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:45.966571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:45.966593  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:45.966605  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:46.027581  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:46.027606  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:48.557168  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:48.568079  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:48.568130  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:48.594574  124886 cri.go:89] found id: ""
	I1008 14:51:48.594594  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.594603  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:48.594609  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:48.594653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:48.621962  124886 cri.go:89] found id: ""
	I1008 14:51:48.621977  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.621984  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:48.621989  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:48.622035  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:48.648065  124886 cri.go:89] found id: ""
	I1008 14:51:48.648080  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.648087  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:48.648091  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:48.648146  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:48.675285  124886 cri.go:89] found id: ""
	I1008 14:51:48.675300  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.675307  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:48.675311  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:48.675356  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:48.701191  124886 cri.go:89] found id: ""
	I1008 14:51:48.701210  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.701218  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:48.701225  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:48.701271  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:48.729042  124886 cri.go:89] found id: ""
	I1008 14:51:48.729069  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.729079  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:48.729086  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:48.729136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:48.754548  124886 cri.go:89] found id: ""
	I1008 14:51:48.754564  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.754572  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:48.754580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:48.754590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:48.822673  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:48.822705  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:48.836997  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:48.837017  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:48.894196  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:48.894212  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:48.894223  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:48.955101  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:48.955127  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.487365  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:51.498554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:51.498603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:51.525066  124886 cri.go:89] found id: ""
	I1008 14:51:51.525081  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.525088  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:51.525094  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:51.525147  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:51.550909  124886 cri.go:89] found id: ""
	I1008 14:51:51.550926  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.550933  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:51.550938  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:51.550989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:51.576844  124886 cri.go:89] found id: ""
	I1008 14:51:51.576860  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.576867  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:51.576871  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:51.576919  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:51.603876  124886 cri.go:89] found id: ""
	I1008 14:51:51.603894  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.603900  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:51.603907  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:51.603958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:51.630518  124886 cri.go:89] found id: ""
	I1008 14:51:51.630533  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.630540  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:51.630545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:51.630591  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:51.656592  124886 cri.go:89] found id: ""
	I1008 14:51:51.656625  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.656634  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:51.656641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:51.656686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:51.682732  124886 cri.go:89] found id: ""
	I1008 14:51:51.682750  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.682757  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:51.682766  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:51.682775  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:51.742589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:51.742612  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.771353  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:51.771369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:51.842948  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:51.842971  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:51.857862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:51.857882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:51.915551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.417267  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:54.428273  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:54.428333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:54.454016  124886 cri.go:89] found id: ""
	I1008 14:51:54.454030  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.454037  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:54.454042  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:54.454097  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:54.479088  124886 cri.go:89] found id: ""
	I1008 14:51:54.479104  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.479112  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:54.479117  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:54.479171  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:54.504383  124886 cri.go:89] found id: ""
	I1008 14:51:54.504401  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.504411  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:54.504418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:54.504481  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:54.530502  124886 cri.go:89] found id: ""
	I1008 14:51:54.530522  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.530529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:54.530534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:54.530578  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:54.556899  124886 cri.go:89] found id: ""
	I1008 14:51:54.556920  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.556929  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:54.556935  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:54.556983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:54.582860  124886 cri.go:89] found id: ""
	I1008 14:51:54.582878  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.582888  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:54.582895  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:54.582954  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:54.609653  124886 cri.go:89] found id: ""
	I1008 14:51:54.609670  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.609679  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:54.609689  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:54.609704  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:54.666095  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.666106  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:54.666116  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:54.725670  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:54.725693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:54.755377  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:54.755394  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:54.824839  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:54.824860  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.340378  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:57.351013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:57.351087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:57.377174  124886 cri.go:89] found id: ""
	I1008 14:51:57.377192  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.377201  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:57.377208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:57.377259  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:57.403239  124886 cri.go:89] found id: ""
	I1008 14:51:57.403254  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.403261  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:57.403271  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:57.403317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:57.429149  124886 cri.go:89] found id: ""
	I1008 14:51:57.429168  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.429179  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:57.429185  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:57.429244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:57.454095  124886 cri.go:89] found id: ""
	I1008 14:51:57.454114  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.454128  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:57.454133  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:57.454187  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:57.479640  124886 cri.go:89] found id: ""
	I1008 14:51:57.479658  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.479665  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:57.479670  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:57.479725  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:57.505776  124886 cri.go:89] found id: ""
	I1008 14:51:57.505795  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.505805  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:57.505811  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:57.505853  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:57.531837  124886 cri.go:89] found id: ""
	I1008 14:51:57.531852  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.531860  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:57.531867  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:57.531878  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:57.599522  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:57.599544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.614111  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:57.614132  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:57.671063  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:57.671074  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:57.671084  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:57.732027  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:57.732050  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:00.263338  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:00.274100  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:00.274167  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:00.299677  124886 cri.go:89] found id: ""
	I1008 14:52:00.299692  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.299698  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:00.299703  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:00.299744  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:00.325037  124886 cri.go:89] found id: ""
	I1008 14:52:00.325055  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.325065  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:00.325071  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:00.325128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:00.351372  124886 cri.go:89] found id: ""
	I1008 14:52:00.351388  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.351397  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:00.351402  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:00.351465  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:00.377746  124886 cri.go:89] found id: ""
	I1008 14:52:00.377761  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.377767  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:00.377772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:00.377838  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:00.403806  124886 cri.go:89] found id: ""
	I1008 14:52:00.403821  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.403827  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:00.403832  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:00.403888  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:00.431653  124886 cri.go:89] found id: ""
	I1008 14:52:00.431673  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.431682  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:00.431687  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:00.431732  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:00.458706  124886 cri.go:89] found id: ""
	I1008 14:52:00.458720  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.458727  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:00.458735  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:00.458744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:00.527333  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:00.527355  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:00.545238  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:00.545260  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:00.604166  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:00.604178  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:00.604190  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:00.667338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:00.667360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.196993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:03.207677  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:03.207730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:03.232932  124886 cri.go:89] found id: ""
	I1008 14:52:03.232952  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.232963  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:03.232969  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:03.233019  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:03.257910  124886 cri.go:89] found id: ""
	I1008 14:52:03.257927  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.257934  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:03.257939  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:03.257989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:03.282476  124886 cri.go:89] found id: ""
	I1008 14:52:03.282491  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.282498  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:03.282503  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:03.282556  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:03.307994  124886 cri.go:89] found id: ""
	I1008 14:52:03.308009  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.308016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:03.308020  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:03.308066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:03.333961  124886 cri.go:89] found id: ""
	I1008 14:52:03.333978  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.333985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:03.333990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:03.334036  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:03.360461  124886 cri.go:89] found id: ""
	I1008 14:52:03.360480  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.360491  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:03.360498  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:03.360546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:03.385935  124886 cri.go:89] found id: ""
	I1008 14:52:03.385951  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.385958  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:03.385965  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:03.385980  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:03.399673  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:03.399689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:03.456423  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:03.456433  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:03.456459  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:03.519728  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:03.519750  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.549347  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:03.549365  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.121403  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:06.132277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:06.132329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:06.158234  124886 cri.go:89] found id: ""
	I1008 14:52:06.158248  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.158255  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:06.158260  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:06.158308  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:06.184118  124886 cri.go:89] found id: ""
	I1008 14:52:06.184136  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.184145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:06.184151  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:06.184201  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:06.210586  124886 cri.go:89] found id: ""
	I1008 14:52:06.210604  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.210613  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:06.210619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:06.210682  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:06.236986  124886 cri.go:89] found id: ""
	I1008 14:52:06.237004  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.237013  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:06.237018  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:06.237064  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:06.264151  124886 cri.go:89] found id: ""
	I1008 14:52:06.264172  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.264182  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:06.264188  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:06.264240  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:06.290106  124886 cri.go:89] found id: ""
	I1008 14:52:06.290120  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.290126  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:06.290132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:06.290177  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:06.316419  124886 cri.go:89] found id: ""
	I1008 14:52:06.316435  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.316453  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:06.316464  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:06.316477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:06.377522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:06.377544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:06.407056  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:06.407075  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.474318  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:06.474342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:06.488482  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:06.488502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:06.546904  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.048569  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:09.059380  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:09.059436  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:09.085888  124886 cri.go:89] found id: ""
	I1008 14:52:09.085906  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.085912  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:09.085918  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:09.085971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:09.113858  124886 cri.go:89] found id: ""
	I1008 14:52:09.113875  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.113882  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:09.113892  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:09.113939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:09.140388  124886 cri.go:89] found id: ""
	I1008 14:52:09.140407  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.140414  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:09.140420  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:09.140493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:09.168003  124886 cri.go:89] found id: ""
	I1008 14:52:09.168018  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.168025  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:09.168030  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:09.168075  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:09.194655  124886 cri.go:89] found id: ""
	I1008 14:52:09.194681  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.194690  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:09.194696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:09.194757  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:09.221388  124886 cri.go:89] found id: ""
	I1008 14:52:09.221405  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.221411  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:09.221416  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:09.221490  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:09.247075  124886 cri.go:89] found id: ""
	I1008 14:52:09.247093  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.247102  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:09.247122  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:09.247133  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:09.304638  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.304650  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:09.304664  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:09.368718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:09.368742  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:09.399217  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:09.399239  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:09.468608  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:09.468629  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:11.984769  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:11.995534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:11.995596  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:12.020218  124886 cri.go:89] found id: ""
	I1008 14:52:12.020234  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.020241  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:12.020247  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:12.020289  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:12.045959  124886 cri.go:89] found id: ""
	I1008 14:52:12.045978  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.045989  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:12.045996  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:12.046103  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:12.072101  124886 cri.go:89] found id: ""
	I1008 14:52:12.072118  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.072125  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:12.072129  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:12.072174  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:12.098793  124886 cri.go:89] found id: ""
	I1008 14:52:12.098808  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.098814  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:12.098819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:12.098871  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:12.124876  124886 cri.go:89] found id: ""
	I1008 14:52:12.124891  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.124900  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:12.124906  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:12.124973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:12.151678  124886 cri.go:89] found id: ""
	I1008 14:52:12.151695  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.151703  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:12.151708  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:12.151764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:12.176969  124886 cri.go:89] found id: ""
	I1008 14:52:12.176986  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.176994  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:12.177004  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:12.177019  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:12.247581  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:12.247604  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:12.262272  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:12.262290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:12.319283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:12.319306  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:12.319318  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:12.383384  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:12.383406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:14.914713  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:14.925495  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:14.925548  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:14.951182  124886 cri.go:89] found id: ""
	I1008 14:52:14.951197  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.951205  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:14.951209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:14.951265  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:14.978925  124886 cri.go:89] found id: ""
	I1008 14:52:14.978941  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.978948  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:14.978953  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:14.979004  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:15.003964  124886 cri.go:89] found id: ""
	I1008 14:52:15.003983  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.003992  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:15.003997  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:15.004061  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:15.030077  124886 cri.go:89] found id: ""
	I1008 14:52:15.030095  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.030102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:15.030107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:15.030154  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:15.055689  124886 cri.go:89] found id: ""
	I1008 14:52:15.055704  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.055711  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:15.055715  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:15.055760  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:15.081174  124886 cri.go:89] found id: ""
	I1008 14:52:15.081191  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.081198  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:15.081203  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:15.081262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:15.107235  124886 cri.go:89] found id: ""
	I1008 14:52:15.107251  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.107257  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:15.107265  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:15.107279  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:15.174130  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:15.174161  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:15.188435  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:15.188471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:15.244706  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:15.244720  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:15.244735  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:15.305071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:15.305098  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
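
Note: the block above is one pass of minikube's wait loop for the apiserver on this node. It first looks for a kube-apiserver process, then asks the CRI for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same two checks run by hand inside the node (commands copied from the log lines above; reaching the node, e.g. via minikube ssh with the right profile, is assumed):

    # does a kube-apiserver process exist? (same pgrep minikube issues; quote the pattern interactively)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # does the CRI know of any kube-apiserver container, running or exited?
    sudo crictl ps -a --quiet --name=kube-apiserver
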
	I1008 14:52:17.835094  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:17.845787  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:17.845870  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:17.871734  124886 cri.go:89] found id: ""
	I1008 14:52:17.871749  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.871757  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:17.871764  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:17.871823  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:17.897412  124886 cri.go:89] found id: ""
	I1008 14:52:17.897433  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.897458  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:17.897467  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:17.897535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:17.925096  124886 cri.go:89] found id: ""
	I1008 14:52:17.925110  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.925117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:17.925122  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:17.925168  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:17.951272  124886 cri.go:89] found id: ""
	I1008 14:52:17.951289  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.951297  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:17.951301  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:17.951347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:17.976965  124886 cri.go:89] found id: ""
	I1008 14:52:17.976985  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.976992  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:17.976998  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:17.977042  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:18.003041  124886 cri.go:89] found id: ""
	I1008 14:52:18.003057  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.003064  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:18.003069  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:18.003113  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:18.028732  124886 cri.go:89] found id: ""
	I1008 14:52:18.028748  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.028756  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:18.028764  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:18.028774  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:18.092440  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:18.092467  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:18.121965  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:18.121984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:18.191653  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:18.191679  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:18.205820  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:18.205839  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:18.261002  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
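
Note: every "describe nodes" attempt in this log fails the same way: kubectl on the node dials https://localhost:8441 and gets connection refused, which simply means nothing is listening on the apiserver port yet, consistent with crictl reporting no kube-apiserver container. A quick way to confirm that from inside the node, assuming ss and curl are present in the node image (neither appears in this log, so treat this as a sketch):

    # is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441
    # probe the apiserver directly; expect "connection refused" while it is down
    curl -k https://localhost:8441/healthz
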
	I1008 14:52:20.762706  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:20.773592  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:20.773660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:20.799324  124886 cri.go:89] found id: ""
	I1008 14:52:20.799340  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.799347  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:20.799352  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:20.799394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:20.825415  124886 cri.go:89] found id: ""
	I1008 14:52:20.825430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.825436  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:20.825452  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:20.825504  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:20.851415  124886 cri.go:89] found id: ""
	I1008 14:52:20.851430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.851437  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:20.851454  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:20.851503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:20.878438  124886 cri.go:89] found id: ""
	I1008 14:52:20.878476  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.878484  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:20.878489  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:20.878536  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:20.903857  124886 cri.go:89] found id: ""
	I1008 14:52:20.903873  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.903884  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:20.903890  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:20.903948  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:20.930746  124886 cri.go:89] found id: ""
	I1008 14:52:20.930763  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.930770  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:20.930791  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:20.930842  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:20.956487  124886 cri.go:89] found id: ""
	I1008 14:52:20.956504  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.956510  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:20.956518  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:20.956528  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:21.026065  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:21.026087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:21.040112  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:21.040129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:21.095891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:21.095902  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:21.095914  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:21.159107  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:21.159129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:23.687668  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:23.698250  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:23.698317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:23.723805  124886 cri.go:89] found id: ""
	I1008 14:52:23.723832  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.723842  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:23.723850  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:23.723900  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:23.749813  124886 cri.go:89] found id: ""
	I1008 14:52:23.749831  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.749840  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:23.749847  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:23.749918  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:23.774918  124886 cri.go:89] found id: ""
	I1008 14:52:23.774934  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.774940  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:23.774945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:23.774999  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:23.800898  124886 cri.go:89] found id: ""
	I1008 14:52:23.800918  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.800925  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:23.800930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:23.800978  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:23.827330  124886 cri.go:89] found id: ""
	I1008 14:52:23.827348  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.827356  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:23.827360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:23.827405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:23.853485  124886 cri.go:89] found id: ""
	I1008 14:52:23.853503  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.853510  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:23.853515  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:23.853560  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:23.878936  124886 cri.go:89] found id: ""
	I1008 14:52:23.878957  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.878967  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:23.878976  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:23.878994  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:23.934831  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:23.934841  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:23.934851  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:23.993858  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:23.993885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:24.022945  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:24.022962  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:24.092836  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:24.092865  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
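
Note: the log-gathering steps themselves are ordinary journalctl/dmesg reads, so the same material can be pulled manually when debugging a stuck start. The commands below are copied verbatim from the log lines above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
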
	I1008 14:52:26.608369  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:26.619983  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:26.620060  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:26.646593  124886 cri.go:89] found id: ""
	I1008 14:52:26.646611  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.646621  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:26.646627  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:26.646678  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:26.673294  124886 cri.go:89] found id: ""
	I1008 14:52:26.673310  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.673317  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:26.673324  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:26.673367  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:26.699235  124886 cri.go:89] found id: ""
	I1008 14:52:26.699251  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.699257  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:26.699262  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:26.699320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:26.724993  124886 cri.go:89] found id: ""
	I1008 14:52:26.725009  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.725016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:26.725021  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:26.725074  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:26.749744  124886 cri.go:89] found id: ""
	I1008 14:52:26.749760  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.749767  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:26.749772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:26.749821  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:26.775226  124886 cri.go:89] found id: ""
	I1008 14:52:26.775246  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.775255  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:26.775260  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:26.775316  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:26.805104  124886 cri.go:89] found id: ""
	I1008 14:52:26.805120  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.805128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:26.805136  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:26.805152  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:26.834601  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:26.834618  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:26.900340  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:26.900361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.914389  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:26.914406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:26.969896  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:26.969911  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:26.969927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.531143  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:29.542884  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:29.542952  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:29.570323  124886 cri.go:89] found id: ""
	I1008 14:52:29.570339  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.570345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:29.570350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:29.570395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:29.596735  124886 cri.go:89] found id: ""
	I1008 14:52:29.596750  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.596756  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:29.596762  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:29.596811  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:29.622878  124886 cri.go:89] found id: ""
	I1008 14:52:29.622892  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.622898  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:29.622903  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:29.622950  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:29.648836  124886 cri.go:89] found id: ""
	I1008 14:52:29.648857  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.648880  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:29.648887  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:29.648939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:29.674729  124886 cri.go:89] found id: ""
	I1008 14:52:29.674747  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.674753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:29.674758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:29.674802  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:29.700542  124886 cri.go:89] found id: ""
	I1008 14:52:29.700558  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.700565  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:29.700571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:29.700615  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:29.726353  124886 cri.go:89] found id: ""
	I1008 14:52:29.726369  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.726375  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:29.726383  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:29.726395  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:29.790538  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:29.790560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:29.805071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:29.805087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:29.861336  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:29.861354  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:29.861367  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.921484  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:29.921507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
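
Note: the "describe nodes" step uses the kubectl binary and kubeconfig that minikube provisions onto the node, so it will keep exiting with status 1 until something binds port 8441. Once an apiserver container is actually up, the same command (verbatim from the log) should return node details instead of the memcache errors:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
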
	I1008 14:52:32.452001  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:32.462783  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:32.462839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:32.488895  124886 cri.go:89] found id: ""
	I1008 14:52:32.488913  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.488922  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:32.488929  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:32.488977  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:32.514655  124886 cri.go:89] found id: ""
	I1008 14:52:32.514674  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.514683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:32.514688  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:32.514739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:32.542007  124886 cri.go:89] found id: ""
	I1008 14:52:32.542027  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.542037  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:32.542044  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:32.542100  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:32.569946  124886 cri.go:89] found id: ""
	I1008 14:52:32.569963  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.569970  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:32.569976  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:32.570022  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:32.595032  124886 cri.go:89] found id: ""
	I1008 14:52:32.595051  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.595061  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:32.595066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:32.595127  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:32.621883  124886 cri.go:89] found id: ""
	I1008 14:52:32.621903  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.621923  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:32.621930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:32.621983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:32.647589  124886 cri.go:89] found id: ""
	I1008 14:52:32.647606  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.647612  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:32.647620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:32.647630  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:32.703098  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:32.703108  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:32.703129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:32.766481  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:32.766502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.794530  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:32.794546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:32.864662  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:32.864687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.381050  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:35.391807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:35.391868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:35.418369  124886 cri.go:89] found id: ""
	I1008 14:52:35.418388  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.418397  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:35.418402  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:35.418467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:35.444660  124886 cri.go:89] found id: ""
	I1008 14:52:35.444676  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.444683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:35.444687  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:35.444736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:35.471158  124886 cri.go:89] found id: ""
	I1008 14:52:35.471183  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.471190  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:35.471195  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:35.471238  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:35.496271  124886 cri.go:89] found id: ""
	I1008 14:52:35.496288  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.496295  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:35.496300  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:35.496345  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:35.521987  124886 cri.go:89] found id: ""
	I1008 14:52:35.522005  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.522015  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:35.522039  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:35.522098  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:35.547647  124886 cri.go:89] found id: ""
	I1008 14:52:35.547664  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.547673  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:35.547678  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:35.547723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:35.573056  124886 cri.go:89] found id: ""
	I1008 14:52:35.573075  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.573085  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:35.573109  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:35.573123  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:35.640898  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:35.640923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.655247  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:35.655265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:35.712555  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:35.712565  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:35.712575  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:35.772556  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:35.772579  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
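
Note: the "container status" step is a runtime-agnostic fallback: it prefers crictl and only shells out to docker if crictl is missing, which should not happen on this --container-runtime=crio run. Copied from the log, with the backtick command substitution spelled out:

    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
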
	I1008 14:52:38.301881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:38.312627  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:38.312694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:38.337192  124886 cri.go:89] found id: ""
	I1008 14:52:38.337210  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.337220  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:38.337227  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:38.337278  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:38.361703  124886 cri.go:89] found id: ""
	I1008 14:52:38.361721  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.361730  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:38.361736  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:38.361786  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:38.387263  124886 cri.go:89] found id: ""
	I1008 14:52:38.387279  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.387286  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:38.387290  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:38.387334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:38.413808  124886 cri.go:89] found id: ""
	I1008 14:52:38.413824  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.413830  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:38.413835  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:38.413880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:38.440014  124886 cri.go:89] found id: ""
	I1008 14:52:38.440029  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.440036  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:38.440041  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:38.440085  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:38.466144  124886 cri.go:89] found id: ""
	I1008 14:52:38.466164  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.466174  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:38.466181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:38.466229  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:38.491536  124886 cri.go:89] found id: ""
	I1008 14:52:38.491554  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.491563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:38.491573  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:38.491584  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.520248  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:38.520265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:38.588833  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:38.588861  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:38.603136  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:38.603155  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:38.659278  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:38.659290  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:38.659301  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.224716  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:41.235550  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:41.235600  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:41.261421  124886 cri.go:89] found id: ""
	I1008 14:52:41.261436  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.261455  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:41.261463  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:41.261516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:41.286798  124886 cri.go:89] found id: ""
	I1008 14:52:41.286813  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.286839  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:41.286844  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:41.286904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:41.312542  124886 cri.go:89] found id: ""
	I1008 14:52:41.312558  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.312567  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:41.312574  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:41.312623  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:41.339001  124886 cri.go:89] found id: ""
	I1008 14:52:41.339016  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.339022  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:41.339027  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:41.339073  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:41.365019  124886 cri.go:89] found id: ""
	I1008 14:52:41.365040  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.365049  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:41.365056  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:41.365115  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:41.389878  124886 cri.go:89] found id: ""
	I1008 14:52:41.389897  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.389904  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:41.389910  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:41.389960  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:41.415856  124886 cri.go:89] found id: ""
	I1008 14:52:41.415875  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.415884  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:41.415895  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:41.415909  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:41.481175  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:41.481196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:41.495356  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:41.495373  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:41.552891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:41.552910  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:41.552927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.615245  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:41.615282  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:44.146351  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:44.157234  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:44.157294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:44.183016  124886 cri.go:89] found id: ""
	I1008 14:52:44.183032  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.183039  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:44.183044  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:44.183094  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:44.209452  124886 cri.go:89] found id: ""
	I1008 14:52:44.209471  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.209480  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:44.209487  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:44.209535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:44.236057  124886 cri.go:89] found id: ""
	I1008 14:52:44.236079  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.236088  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:44.236094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:44.236165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:44.262249  124886 cri.go:89] found id: ""
	I1008 14:52:44.262265  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.262274  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:44.262281  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:44.262333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:44.288222  124886 cri.go:89] found id: ""
	I1008 14:52:44.288240  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.288249  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:44.288254  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:44.288303  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:44.312991  124886 cri.go:89] found id: ""
	I1008 14:52:44.313009  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.313017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:44.313022  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:44.313066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:44.338794  124886 cri.go:89] found id: ""
	I1008 14:52:44.338814  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.338823  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:44.338835  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:44.338849  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:44.408632  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:44.408655  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:44.423360  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:44.423381  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:44.481035  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:44.481052  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:44.481068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:44.545061  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:44.545093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.075772  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:47.086739  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:47.086782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:47.112465  124886 cri.go:89] found id: ""
	I1008 14:52:47.112483  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.112492  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:47.112497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:47.112546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:47.140124  124886 cri.go:89] found id: ""
	I1008 14:52:47.140139  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.140145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:47.140150  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:47.140194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:47.167347  124886 cri.go:89] found id: ""
	I1008 14:52:47.167366  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.167376  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:47.167382  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:47.167428  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:47.193008  124886 cri.go:89] found id: ""
	I1008 14:52:47.193025  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.193032  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:47.193037  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:47.193081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:47.218907  124886 cri.go:89] found id: ""
	I1008 14:52:47.218922  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.218932  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:47.218938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:47.218992  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:47.244390  124886 cri.go:89] found id: ""
	I1008 14:52:47.244406  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.244413  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:47.244418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:47.244485  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:47.270432  124886 cri.go:89] found id: ""
	I1008 14:52:47.270460  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.270473  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:47.270482  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:47.270496  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:47.284419  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:47.284434  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:47.340814  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:47.340829  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:47.340840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:47.405347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:47.405371  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.434675  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:47.434693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:50.001509  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:50.012521  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:50.012580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:50.038871  124886 cri.go:89] found id: ""
	I1008 14:52:50.038886  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.038895  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:50.038901  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:50.038945  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:50.065691  124886 cri.go:89] found id: ""
	I1008 14:52:50.065707  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.065713  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:50.065718  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:50.065764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:50.091421  124886 cri.go:89] found id: ""
	I1008 14:52:50.091439  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.091459  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:50.091466  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:50.091516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:50.117900  124886 cri.go:89] found id: ""
	I1008 14:52:50.117916  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.117922  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:50.117927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:50.117971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:50.143795  124886 cri.go:89] found id: ""
	I1008 14:52:50.143811  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.143837  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:50.143842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:50.143889  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:50.170009  124886 cri.go:89] found id: ""
	I1008 14:52:50.170025  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.170032  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:50.170036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:50.170081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:50.195182  124886 cri.go:89] found id: ""
	I1008 14:52:50.195198  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.195204  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:50.195213  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:50.195226  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:50.208906  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:50.208923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:50.263732  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:50.263744  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:50.263754  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:50.321967  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:50.321990  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:50.350825  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:50.350843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:52.919243  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:52.929975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:52.930069  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:52.956423  124886 cri.go:89] found id: ""
	I1008 14:52:52.956439  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.956463  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:52.956470  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:52.956519  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:52.982128  124886 cri.go:89] found id: ""
	I1008 14:52:52.982143  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.982150  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:52.982155  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:52.982204  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:53.008335  124886 cri.go:89] found id: ""
	I1008 14:52:53.008351  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.008358  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:53.008363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:53.008416  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:53.035683  124886 cri.go:89] found id: ""
	I1008 14:52:53.035698  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.035705  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:53.035710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:53.035753  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:53.061482  124886 cri.go:89] found id: ""
	I1008 14:52:53.061590  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.061610  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:53.061619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:53.061673  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:53.088358  124886 cri.go:89] found id: ""
	I1008 14:52:53.088375  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.088384  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:53.088390  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:53.088467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:53.113970  124886 cri.go:89] found id: ""
	I1008 14:52:53.113988  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.113995  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:53.114003  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:53.114016  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:53.181486  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:53.181511  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:53.195603  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:53.195620  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:53.251571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:53.251582  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:53.251592  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:53.312589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:53.312610  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:55.843180  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:55.854192  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:55.854250  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:55.878967  124886 cri.go:89] found id: ""
	I1008 14:52:55.878984  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.878992  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:55.878997  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:55.879050  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:55.904136  124886 cri.go:89] found id: ""
	I1008 14:52:55.904151  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.904157  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:55.904174  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:55.904216  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:55.928319  124886 cri.go:89] found id: ""
	I1008 14:52:55.928337  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.928348  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:55.928353  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:55.928406  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:55.955314  124886 cri.go:89] found id: ""
	I1008 14:52:55.955330  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.955338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:55.955345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:55.955405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:55.980957  124886 cri.go:89] found id: ""
	I1008 14:52:55.980976  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.980985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:55.980992  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:55.981040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:56.006492  124886 cri.go:89] found id: ""
	I1008 14:52:56.006507  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.006514  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:56.006519  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:56.006566  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:56.032919  124886 cri.go:89] found id: ""
	I1008 14:52:56.032934  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.032940  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:56.032948  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:56.032960  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:56.061693  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:56.061713  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:56.127262  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:56.127284  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:56.141728  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:56.141744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:56.197783  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:56.197799  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:56.197815  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:58.759309  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:58.770096  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:58.770150  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:58.796177  124886 cri.go:89] found id: ""
	I1008 14:52:58.796192  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.796199  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:58.796208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:58.796260  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:58.821988  124886 cri.go:89] found id: ""
	I1008 14:52:58.822006  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.822013  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:58.822018  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:58.822068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:58.847935  124886 cri.go:89] found id: ""
	I1008 14:52:58.847953  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.847961  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:58.847966  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:58.848015  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:58.874796  124886 cri.go:89] found id: ""
	I1008 14:52:58.874814  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.874821  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:58.874826  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:58.874880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:58.899925  124886 cri.go:89] found id: ""
	I1008 14:52:58.899941  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.899948  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:58.899953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:58.900008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:58.926934  124886 cri.go:89] found id: ""
	I1008 14:52:58.926950  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.926958  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:58.926963  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:58.927006  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:58.953664  124886 cri.go:89] found id: ""
	I1008 14:52:58.953680  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.953687  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:58.953694  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:58.953709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:59.010616  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:59.010629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:59.010640  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:59.071358  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:59.071382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:59.099863  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:59.099886  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:59.168071  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:59.168163  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.684667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:01.695456  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:01.695524  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:01.721627  124886 cri.go:89] found id: ""
	I1008 14:53:01.721644  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.721652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:01.721656  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:01.721715  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:01.748495  124886 cri.go:89] found id: ""
	I1008 14:53:01.748512  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.748518  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:01.748523  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:01.748583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:01.774281  124886 cri.go:89] found id: ""
	I1008 14:53:01.774298  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.774310  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:01.774316  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:01.774377  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:01.800414  124886 cri.go:89] found id: ""
	I1008 14:53:01.800430  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.800437  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:01.800458  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:01.800513  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:01.825727  124886 cri.go:89] found id: ""
	I1008 14:53:01.825746  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.825753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:01.825758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:01.825804  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:01.852777  124886 cri.go:89] found id: ""
	I1008 14:53:01.852794  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.852802  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:01.852807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:01.852855  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:01.879499  124886 cri.go:89] found id: ""
	I1008 14:53:01.879516  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.879522  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:01.879530  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:01.879542  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:01.908367  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:01.908386  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:01.976337  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:01.976358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.990844  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:01.990863  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:02.047840  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:02.047852  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:02.047864  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.612824  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:04.623886  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:04.623937  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:04.650245  124886 cri.go:89] found id: ""
	I1008 14:53:04.650265  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.650274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:04.650282  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:04.650338  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:04.675795  124886 cri.go:89] found id: ""
	I1008 14:53:04.675814  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.675849  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:04.675856  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:04.675910  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:04.701855  124886 cri.go:89] found id: ""
	I1008 14:53:04.701874  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.701883  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:04.701889  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:04.701951  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:04.727569  124886 cri.go:89] found id: ""
	I1008 14:53:04.727584  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.727590  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:04.727595  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:04.727637  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:04.753254  124886 cri.go:89] found id: ""
	I1008 14:53:04.753269  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.753276  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:04.753280  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:04.753329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:04.779529  124886 cri.go:89] found id: ""
	I1008 14:53:04.779548  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.779557  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:04.779564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:04.779611  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:04.806307  124886 cri.go:89] found id: ""
	I1008 14:53:04.806326  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.806335  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:04.806346  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:04.806361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:04.820357  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:04.820374  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:04.876718  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:04.876732  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:04.876748  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.940387  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:04.940412  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:04.969994  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:04.970009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.538422  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:07.550831  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:07.550884  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:07.577673  124886 cri.go:89] found id: ""
	I1008 14:53:07.577687  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.577693  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:07.577698  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:07.577750  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:07.603662  124886 cri.go:89] found id: ""
	I1008 14:53:07.603680  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.603695  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:07.603700  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:07.603746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:07.629802  124886 cri.go:89] found id: ""
	I1008 14:53:07.629821  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.629830  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:07.629834  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:07.629886  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:07.656081  124886 cri.go:89] found id: ""
	I1008 14:53:07.656096  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.656102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:07.656107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:07.656170  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:07.682162  124886 cri.go:89] found id: ""
	I1008 14:53:07.682177  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.682184  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:07.682189  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:07.682233  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:07.708617  124886 cri.go:89] found id: ""
	I1008 14:53:07.708635  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.708648  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:07.708653  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:07.708708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:07.734755  124886 cri.go:89] found id: ""
	I1008 14:53:07.734772  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.734782  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:07.734793  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:07.734807  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:07.794522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:07.794548  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:07.823563  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:07.823581  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.892786  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:07.892808  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:07.907262  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:07.907281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:07.962940  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.464656  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:10.476746  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:10.476800  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:10.502937  124886 cri.go:89] found id: ""
	I1008 14:53:10.502958  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.502968  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:10.502974  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:10.503025  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:10.529780  124886 cri.go:89] found id: ""
	I1008 14:53:10.529796  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.529803  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:10.529807  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:10.529856  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:10.556092  124886 cri.go:89] found id: ""
	I1008 14:53:10.556108  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.556117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:10.556124  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:10.556184  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:10.582264  124886 cri.go:89] found id: ""
	I1008 14:53:10.582281  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.582290  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:10.582296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:10.582354  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:10.608631  124886 cri.go:89] found id: ""
	I1008 14:53:10.608647  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.608655  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:10.608662  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:10.608721  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:10.635697  124886 cri.go:89] found id: ""
	I1008 14:53:10.635715  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.635725  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:10.635732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:10.635793  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:10.661998  124886 cri.go:89] found id: ""
	I1008 14:53:10.662018  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.662028  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:10.662040  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:10.662055  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:10.728096  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:10.728121  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:10.742521  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:10.742543  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:10.799551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.799566  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:10.799578  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:10.863614  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:10.863636  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.396084  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:13.407066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:13.407128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:13.433323  124886 cri.go:89] found id: ""
	I1008 14:53:13.433339  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.433345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:13.433350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:13.433393  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:13.460409  124886 cri.go:89] found id: ""
	I1008 14:53:13.460510  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.460522  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:13.460528  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:13.460589  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:13.487660  124886 cri.go:89] found id: ""
	I1008 14:53:13.487679  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.487689  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:13.487696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:13.487746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:13.515522  124886 cri.go:89] found id: ""
	I1008 14:53:13.515538  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.515546  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:13.515551  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:13.515595  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:13.540751  124886 cri.go:89] found id: ""
	I1008 14:53:13.540767  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.540773  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:13.540778  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:13.540846  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:13.566812  124886 cri.go:89] found id: ""
	I1008 14:53:13.566829  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.566837  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:13.566842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:13.566904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:13.593236  124886 cri.go:89] found id: ""
	I1008 14:53:13.593255  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.593262  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:13.593271  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:13.593281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:13.657627  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:13.657651  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.686303  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:13.686320  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:13.755568  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:13.755591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:13.769800  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:13.769819  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:13.826318  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:16.327013  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:16.337840  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:16.337908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:16.363203  124886 cri.go:89] found id: ""
	I1008 14:53:16.363221  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.363230  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:16.363235  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:16.363288  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:16.388535  124886 cri.go:89] found id: ""
	I1008 14:53:16.388551  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.388557  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:16.388563  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:16.388606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:16.414195  124886 cri.go:89] found id: ""
	I1008 14:53:16.414213  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.414221  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:16.414226  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:16.414274  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:16.440199  124886 cri.go:89] found id: ""
	I1008 14:53:16.440214  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.440221  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:16.440227  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:16.440283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:16.465899  124886 cri.go:89] found id: ""
	I1008 14:53:16.465918  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.465925  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:16.465931  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:16.465976  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:16.491135  124886 cri.go:89] found id: ""
	I1008 14:53:16.491151  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.491157  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:16.491162  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:16.491205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:16.517298  124886 cri.go:89] found id: ""
	I1008 14:53:16.517315  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.517323  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:16.517331  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:16.517342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:16.581777  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:16.581803  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:16.611824  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:16.611843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:16.679935  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:16.679957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:16.694087  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:16.694103  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:16.750382  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:19.252068  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:19.262927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:19.262980  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:19.288263  124886 cri.go:89] found id: ""
	I1008 14:53:19.288280  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.288286  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:19.288291  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:19.288334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:19.314749  124886 cri.go:89] found id: ""
	I1008 14:53:19.314769  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.314776  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:19.314781  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:19.314833  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:19.343105  124886 cri.go:89] found id: ""
	I1008 14:53:19.343124  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.343132  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:19.343137  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:19.343194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:19.369348  124886 cri.go:89] found id: ""
	I1008 14:53:19.369367  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.369376  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:19.369384  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:19.369438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:19.394541  124886 cri.go:89] found id: ""
	I1008 14:53:19.394556  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.394564  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:19.394569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:19.394617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:19.419883  124886 cri.go:89] found id: ""
	I1008 14:53:19.419900  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.419907  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:19.419911  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:19.419959  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:19.447316  124886 cri.go:89] found id: ""
	I1008 14:53:19.447332  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.447339  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:19.447347  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:19.447360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:19.509190  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:19.509213  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:19.538580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:19.538601  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:19.610379  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:19.610406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:19.625094  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:19.625115  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:19.682583  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:22.184381  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:22.195435  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:22.195496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:22.222530  124886 cri.go:89] found id: ""
	I1008 14:53:22.222549  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.222559  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:22.222565  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:22.222631  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:22.249103  124886 cri.go:89] found id: ""
	I1008 14:53:22.249118  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.249125  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:22.249130  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:22.249185  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:22.275859  124886 cri.go:89] found id: ""
	I1008 14:53:22.275877  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.275886  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:22.275891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:22.275944  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:22.301816  124886 cri.go:89] found id: ""
	I1008 14:53:22.301835  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.301845  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:22.301852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:22.301906  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:22.328795  124886 cri.go:89] found id: ""
	I1008 14:53:22.328810  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.328817  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:22.328821  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:22.328877  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:22.355119  124886 cri.go:89] found id: ""
	I1008 14:53:22.355134  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.355141  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:22.355146  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:22.355200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:22.382211  124886 cri.go:89] found id: ""
	I1008 14:53:22.382229  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.382238  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:22.382248  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:22.382262  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:22.442814  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:22.442840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:22.473721  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:22.473746  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:22.539788  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:22.539811  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:22.554277  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:22.554295  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:22.610102  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.110358  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:25.121359  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:25.121409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:25.146726  124886 cri.go:89] found id: ""
	I1008 14:53:25.146741  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.146747  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:25.146752  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:25.146797  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:25.173762  124886 cri.go:89] found id: ""
	I1008 14:53:25.173780  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.173788  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:25.173792  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:25.173839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:25.200613  124886 cri.go:89] found id: ""
	I1008 14:53:25.200630  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.200636  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:25.200641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:25.200686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:25.227307  124886 cri.go:89] found id: ""
	I1008 14:53:25.227327  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.227338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:25.227345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:25.227395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:25.253257  124886 cri.go:89] found id: ""
	I1008 14:53:25.253272  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.253278  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:25.253283  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:25.253329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:25.281060  124886 cri.go:89] found id: ""
	I1008 14:53:25.281077  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.281089  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:25.281094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:25.281140  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:25.306651  124886 cri.go:89] found id: ""
	I1008 14:53:25.306668  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.306678  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:25.306688  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:25.306699  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:25.373410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:25.373433  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:25.388282  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:25.388304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:25.445863  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.445874  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:25.445885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:25.510564  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:25.510590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.041417  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:28.052378  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:28.052432  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:28.078711  124886 cri.go:89] found id: ""
	I1008 14:53:28.078728  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.078734  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:28.078740  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:28.078782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:28.105010  124886 cri.go:89] found id: ""
	I1008 14:53:28.105025  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.105031  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:28.105036  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:28.105088  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:28.131983  124886 cri.go:89] found id: ""
	I1008 14:53:28.132001  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.132011  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:28.132017  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:28.132076  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:28.159135  124886 cri.go:89] found id: ""
	I1008 14:53:28.159153  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.159160  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:28.159166  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:28.159212  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:28.187793  124886 cri.go:89] found id: ""
	I1008 14:53:28.187811  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.187821  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:28.187827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:28.187872  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:28.214232  124886 cri.go:89] found id: ""
	I1008 14:53:28.214251  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.214265  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:28.214272  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:28.214335  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:28.240649  124886 cri.go:89] found id: ""
	I1008 14:53:28.240663  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.240669  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:28.240677  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:28.240687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:28.304071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:28.304094  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.333331  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:28.333346  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:28.401896  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:28.401919  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:28.416514  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:28.416531  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:28.472271  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:30.972553  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:30.983612  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:30.983666  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:31.011336  124886 cri.go:89] found id: ""
	I1008 14:53:31.011350  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.011357  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:31.011362  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:31.011405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:31.036913  124886 cri.go:89] found id: ""
	I1008 14:53:31.036935  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.036944  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:31.036948  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:31.037003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:31.063500  124886 cri.go:89] found id: ""
	I1008 14:53:31.063516  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.063523  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:31.063527  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:31.063582  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:31.091035  124886 cri.go:89] found id: ""
	I1008 14:53:31.091057  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.091066  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:31.091073  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:31.091123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:31.117295  124886 cri.go:89] found id: ""
	I1008 14:53:31.117310  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.117317  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:31.117322  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:31.117372  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:31.143795  124886 cri.go:89] found id: ""
	I1008 14:53:31.143810  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.143815  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:31.143820  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:31.143863  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:31.170134  124886 cri.go:89] found id: ""
	I1008 14:53:31.170150  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.170157  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:31.170164  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:31.170174  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:31.241300  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:31.241324  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:31.255637  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:31.255656  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:31.312716  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:31.312725  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:31.312736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:31.377091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:31.377114  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:33.907080  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:33.918207  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:33.918262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:33.944092  124886 cri.go:89] found id: ""
	I1008 14:53:33.944111  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.944122  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:33.944129  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:33.944192  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:33.970271  124886 cri.go:89] found id: ""
	I1008 14:53:33.970286  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.970293  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:33.970298  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:33.970347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:33.996407  124886 cri.go:89] found id: ""
	I1008 14:53:33.996421  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.996427  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:33.996433  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:33.996503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:34.023513  124886 cri.go:89] found id: ""
	I1008 14:53:34.023533  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.023542  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:34.023549  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:34.023606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:34.050777  124886 cri.go:89] found id: ""
	I1008 14:53:34.050797  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.050807  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:34.050813  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:34.050868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:34.077691  124886 cri.go:89] found id: ""
	I1008 14:53:34.077710  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.077719  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:34.077724  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:34.077769  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:34.104354  124886 cri.go:89] found id: ""
	I1008 14:53:34.104373  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.104380  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:34.104388  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:34.104404  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:34.171873  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:34.171899  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:34.185891  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:34.185908  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:34.243162  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:34.243172  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:34.243185  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:34.306766  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:34.306791  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:36.836905  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:36.848013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:36.848068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:36.873912  124886 cri.go:89] found id: ""
	I1008 14:53:36.873930  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.873938  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:36.873944  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:36.873994  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:36.899859  124886 cri.go:89] found id: ""
	I1008 14:53:36.899875  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.899881  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:36.899886  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:36.899930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:36.926292  124886 cri.go:89] found id: ""
	I1008 14:53:36.926314  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.926321  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:36.926326  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:36.926370  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:36.952172  124886 cri.go:89] found id: ""
	I1008 14:53:36.952189  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.952196  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:36.952201  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:36.952248  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:36.978525  124886 cri.go:89] found id: ""
	I1008 14:53:36.978542  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.978548  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:36.978553  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:36.978605  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:37.005955  124886 cri.go:89] found id: ""
	I1008 14:53:37.005973  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.005984  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:37.005990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:37.006037  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:37.032282  124886 cri.go:89] found id: ""
	I1008 14:53:37.032300  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.032310  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:37.032320  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:37.032336  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:37.100471  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:37.100507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:37.114707  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:37.114727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:37.173117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:37.173128  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:37.173138  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:37.237613  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:37.237637  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:39.769167  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:39.780181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:39.780239  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:39.805900  124886 cri.go:89] found id: ""
	I1008 14:53:39.805921  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.805928  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:39.805935  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:39.805982  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:39.832463  124886 cri.go:89] found id: ""
	I1008 14:53:39.832485  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.832493  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:39.832501  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:39.832565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:39.859105  124886 cri.go:89] found id: ""
	I1008 14:53:39.859120  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.859127  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:39.859132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:39.859176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:39.885372  124886 cri.go:89] found id: ""
	I1008 14:53:39.885395  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.885402  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:39.885410  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:39.885476  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:39.911669  124886 cri.go:89] found id: ""
	I1008 14:53:39.911684  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.911691  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:39.911696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:39.911743  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:39.939236  124886 cri.go:89] found id: ""
	I1008 14:53:39.939254  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.939263  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:39.939269  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:39.939329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:39.967816  124886 cri.go:89] found id: ""
	I1008 14:53:39.967833  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.967839  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:39.967847  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:39.967859  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:39.982071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:39.982090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:40.038524  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:40.038545  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:40.038560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:40.099347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:40.099369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:40.128637  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:40.128654  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.700345  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:42.711170  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:42.711224  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:42.738404  124886 cri.go:89] found id: ""
	I1008 14:53:42.738420  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.738426  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:42.738431  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:42.738496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:42.765170  124886 cri.go:89] found id: ""
	I1008 14:53:42.765185  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.765192  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:42.765196  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:42.765244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:42.790844  124886 cri.go:89] found id: ""
	I1008 14:53:42.790862  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.790870  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:42.790876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:42.790920  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:42.817749  124886 cri.go:89] found id: ""
	I1008 14:53:42.817765  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.817772  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:42.817777  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:42.817826  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:42.844796  124886 cri.go:89] found id: ""
	I1008 14:53:42.844815  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.844823  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:42.844827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:42.844882  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:42.870976  124886 cri.go:89] found id: ""
	I1008 14:53:42.870993  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.871001  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:42.871006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:42.871051  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:42.897679  124886 cri.go:89] found id: ""
	I1008 14:53:42.897698  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.897707  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:42.897716  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:42.897727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.967720  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:42.967744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:42.981967  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:42.981984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:43.039728  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:43.039742  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:43.039753  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:43.101886  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:43.101911  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:45.635598  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:45.646564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:45.646617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:45.673775  124886 cri.go:89] found id: ""
	I1008 14:53:45.673791  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.673797  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:45.673802  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:45.673845  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:45.700610  124886 cri.go:89] found id: ""
	I1008 14:53:45.700627  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.700633  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:45.700638  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:45.700694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:45.726636  124886 cri.go:89] found id: ""
	I1008 14:53:45.726653  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.726662  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:45.726669  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:45.726723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:45.753352  124886 cri.go:89] found id: ""
	I1008 14:53:45.753367  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.753374  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:45.753379  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:45.753434  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:45.780250  124886 cri.go:89] found id: ""
	I1008 14:53:45.780266  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.780272  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:45.780277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:45.780326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:45.805847  124886 cri.go:89] found id: ""
	I1008 14:53:45.805863  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.805870  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:45.805875  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:45.805940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:45.832274  124886 cri.go:89] found id: ""
	I1008 14:53:45.832290  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.832297  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:45.832304  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:45.832315  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:45.901895  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:45.901925  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:45.916420  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:45.916438  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:45.972937  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:45.972948  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:45.972958  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:46.034817  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:46.034841  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.564993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:48.576052  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:48.576102  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:48.602007  124886 cri.go:89] found id: ""
	I1008 14:53:48.602024  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.602031  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:48.602035  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:48.602080  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:48.628143  124886 cri.go:89] found id: ""
	I1008 14:53:48.628160  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.628168  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:48.628173  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:48.628218  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:48.655880  124886 cri.go:89] found id: ""
	I1008 14:53:48.655898  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.655907  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:48.655913  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:48.655958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:48.683255  124886 cri.go:89] found id: ""
	I1008 14:53:48.683270  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.683278  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:48.683284  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:48.683337  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:48.709473  124886 cri.go:89] found id: ""
	I1008 14:53:48.709492  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.709501  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:48.709508  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:48.709567  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:48.736246  124886 cri.go:89] found id: ""
	I1008 14:53:48.736268  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.736274  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:48.736279  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:48.736327  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:48.763463  124886 cri.go:89] found id: ""
	I1008 14:53:48.763483  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.763493  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:48.763503  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:48.763518  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.792359  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:48.792378  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:48.859056  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:48.859077  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:48.873385  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:48.873405  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:48.931065  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:48.931075  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:48.931087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:51.494941  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:51.505819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:51.505869  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:51.533622  124886 cri.go:89] found id: ""
	I1008 14:53:51.533643  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.533652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:51.533659  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:51.533707  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:51.560499  124886 cri.go:89] found id: ""
	I1008 14:53:51.560519  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.560528  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:51.560536  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:51.560584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:51.587541  124886 cri.go:89] found id: ""
	I1008 14:53:51.587556  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.587564  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:51.587569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:51.587616  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:51.614266  124886 cri.go:89] found id: ""
	I1008 14:53:51.614284  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.614291  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:51.614296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:51.614343  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:51.639614  124886 cri.go:89] found id: ""
	I1008 14:53:51.639632  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.639641  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:51.639649  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:51.639708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:51.667306  124886 cri.go:89] found id: ""
	I1008 14:53:51.667322  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.667328  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:51.667333  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:51.667375  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:51.692160  124886 cri.go:89] found id: ""
	I1008 14:53:51.692175  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.692182  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:51.692191  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:51.692204  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:51.720341  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:51.720358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:51.785600  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:51.785622  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:51.800298  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:51.800317  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:51.857283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:51.857293  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:51.857304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:54.424673  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:54.435975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:54.436023  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:54.462429  124886 cri.go:89] found id: ""
	I1008 14:53:54.462462  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.462472  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:54.462479  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:54.462528  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:54.489261  124886 cri.go:89] found id: ""
	I1008 14:53:54.489276  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.489284  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:54.489289  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:54.489344  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:54.514962  124886 cri.go:89] found id: ""
	I1008 14:53:54.514980  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.514990  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:54.514996  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:54.515040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:54.541414  124886 cri.go:89] found id: ""
	I1008 14:53:54.541428  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.541435  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:54.541439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:54.541501  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:54.567913  124886 cri.go:89] found id: ""
	I1008 14:53:54.567931  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.567940  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:54.567945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:54.568008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:54.594492  124886 cri.go:89] found id: ""
	I1008 14:53:54.594511  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.594522  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:54.594528  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:54.594583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:54.621305  124886 cri.go:89] found id: ""
	I1008 14:53:54.621321  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.621330  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:54.621338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:54.621348  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:54.648627  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:54.648645  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:54.717360  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:54.717382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:54.731905  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:54.731923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:54.788630  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:54.788640  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:54.788650  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.353718  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:57.365518  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:57.365570  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:57.391621  124886 cri.go:89] found id: ""
	I1008 14:53:57.391638  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.391646  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:57.391650  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:57.391704  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:57.419557  124886 cri.go:89] found id: ""
	I1008 14:53:57.419574  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.419582  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:57.419587  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:57.419643  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:57.447029  124886 cri.go:89] found id: ""
	I1008 14:53:57.447047  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.447059  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:57.447077  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:57.447126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:57.473391  124886 cri.go:89] found id: ""
	I1008 14:53:57.473410  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.473418  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:57.473423  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:57.473494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:57.499437  124886 cri.go:89] found id: ""
	I1008 14:53:57.499472  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.499481  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:57.499486  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:57.499542  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:57.525753  124886 cri.go:89] found id: ""
	I1008 14:53:57.525770  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.525776  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:57.525782  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:57.525827  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:57.555506  124886 cri.go:89] found id: ""
	I1008 14:53:57.555523  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.555529  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:57.555539  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:57.555553  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:57.623045  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:57.623068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:57.637620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:57.637638  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:57.695326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:57.695339  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:57.695356  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.755685  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:57.755710  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:00.285648  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:00.296554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:00.296603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:00.322379  124886 cri.go:89] found id: ""
	I1008 14:54:00.322396  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.322405  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:00.322409  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:00.322474  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:00.349397  124886 cri.go:89] found id: ""
	I1008 14:54:00.349414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.349423  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:00.349429  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:00.349507  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:00.375588  124886 cri.go:89] found id: ""
	I1008 14:54:00.375602  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.375608  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:00.375613  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:00.375670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:00.401398  124886 cri.go:89] found id: ""
	I1008 14:54:00.401414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.401420  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:00.401426  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:00.401494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:00.427652  124886 cri.go:89] found id: ""
	I1008 14:54:00.427668  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.427675  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:00.427680  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:00.427736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:00.451896  124886 cri.go:89] found id: ""
	I1008 14:54:00.451911  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.451918  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:00.451923  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:00.451967  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:00.478107  124886 cri.go:89] found id: ""
	I1008 14:54:00.478122  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.478128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:00.478135  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:00.478145  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:00.547950  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:00.547974  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:00.561968  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:00.561986  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:00.618117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:00.618131  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:00.618141  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:00.683464  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:00.683490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.211808  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:03.222618  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:03.222667  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:03.248716  124886 cri.go:89] found id: ""
	I1008 14:54:03.248732  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.248738  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:03.248742  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:03.248784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:03.275183  124886 cri.go:89] found id: ""
	I1008 14:54:03.275202  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.275209  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:03.275214  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:03.275262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:03.301882  124886 cri.go:89] found id: ""
	I1008 14:54:03.301909  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.301915  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:03.301920  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:03.301966  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:03.328783  124886 cri.go:89] found id: ""
	I1008 14:54:03.328799  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.328811  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:03.328817  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:03.328864  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:03.355235  124886 cri.go:89] found id: ""
	I1008 14:54:03.355251  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.355259  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:03.355268  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:03.355313  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:03.382286  124886 cri.go:89] found id: ""
	I1008 14:54:03.382305  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.382313  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:03.382318  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:03.382371  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:03.408682  124886 cri.go:89] found id: ""
	I1008 14:54:03.408700  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.408708  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:03.408718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:03.408732  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.438177  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:03.438196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:03.507859  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:03.507881  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:03.523723  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:03.523747  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:03.580407  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:03.580418  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:03.580430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.142863  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:06.153852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:06.153912  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:06.180234  124886 cri.go:89] found id: ""
	I1008 14:54:06.180253  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.180264  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:06.180271  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:06.180320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:06.207080  124886 cri.go:89] found id: ""
	I1008 14:54:06.207094  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.207101  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:06.207106  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:06.207152  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:06.232369  124886 cri.go:89] found id: ""
	I1008 14:54:06.232384  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.232390  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:06.232394  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:06.232438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:06.257360  124886 cri.go:89] found id: ""
	I1008 14:54:06.257376  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.257383  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:06.257388  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:06.257433  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:06.284487  124886 cri.go:89] found id: ""
	I1008 14:54:06.284507  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.284516  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:06.284523  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:06.284584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:06.310846  124886 cri.go:89] found id: ""
	I1008 14:54:06.310863  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.310874  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:06.310882  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:06.310935  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:06.337095  124886 cri.go:89] found id: ""
	I1008 14:54:06.337114  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.337121  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:06.337130  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:06.337142  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:06.406561  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:06.406591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:06.421066  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:06.421088  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:06.477926  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:06.477943  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:06.477957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.538516  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:06.538537  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:09.071758  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:09.082621  124886 kubeadm.go:601] duration metric: took 4m3.01446136s to restartPrimaryControlPlane
	W1008 14:54:09.082718  124886 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 14:54:09.082774  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:54:09.534098  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:54:09.546894  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:54:09.555065  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:54:09.555116  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:54:09.563122  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:54:09.563134  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:54:09.563181  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:54:09.571418  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:54:09.571492  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:54:09.579061  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:54:09.587199  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:54:09.587244  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:54:09.594420  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.602223  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:54:09.602263  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.609598  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:54:09.616978  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:54:09.617035  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:54:09.624225  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:54:09.679083  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:54:09.736432  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:58:12.118648  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:58:12.118737  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:58:12.121564  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:58:12.121611  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:58:12.121691  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:58:12.121739  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:58:12.121768  124886 kubeadm.go:318] OS: Linux
	I1008 14:58:12.121805  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:58:12.121846  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:58:12.121885  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:58:12.121936  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:58:12.121975  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:58:12.122056  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:58:12.122130  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:58:12.122194  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:58:12.122280  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:58:12.122381  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:58:12.122523  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:58:12.122608  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:58:12.124721  124886 out.go:252]   - Generating certificates and keys ...
	I1008 14:58:12.124815  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:58:12.124880  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:58:12.124964  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:58:12.125031  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:58:12.125148  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:58:12.125193  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:58:12.125282  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:58:12.125362  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:58:12.125490  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:58:12.125594  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:58:12.125626  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:58:12.125673  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:58:12.125714  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:58:12.125760  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:58:12.125802  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:58:12.125857  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:58:12.125902  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:58:12.125971  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:58:12.126032  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:58:12.128152  124886 out.go:252]   - Booting up control plane ...
	I1008 14:58:12.128237  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:58:12.128300  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:58:12.128371  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:58:12.128508  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:58:12.128583  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:58:12.128689  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:58:12.128762  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:58:12.128794  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:58:12.128904  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:58:12.128993  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:58:12.129038  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0016053s
	I1008 14:58:12.129115  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:58:12.129187  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:58:12.129304  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:58:12.129408  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:58:12.129490  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	I1008 14:58:12.129546  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	I1008 14:58:12.129607  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	I1008 14:58:12.129609  124886 kubeadm.go:318] 
	I1008 14:58:12.129696  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:58:12.129765  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:58:12.129857  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:58:12.129935  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:58:12.129999  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:58:12.130073  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:58:12.130125  124886 kubeadm.go:318] 
	W1008 14:58:12.130230  124886 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 14:58:12.130328  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:58:12.582965  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:58:12.596265  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:58:12.596310  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:58:12.604829  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:58:12.604840  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:58:12.604880  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:58:12.613146  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:58:12.613253  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:58:12.621163  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:58:12.629390  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:58:12.629433  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:58:12.637274  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.645831  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:58:12.645886  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.653972  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:58:12.662348  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:58:12.662392  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:58:12.670230  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:58:12.730328  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:58:12.789898  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:02:14.463875  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:02:14.464082  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:02:14.466966  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:02:14.467026  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:02:14.467112  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:02:14.467156  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:02:14.467184  124886 kubeadm.go:318] OS: Linux
	I1008 15:02:14.467232  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:02:14.467270  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:02:14.467309  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:02:14.467348  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:02:14.467386  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:02:14.467424  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:02:14.467494  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:02:14.467536  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:02:14.467596  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:02:14.467693  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:02:14.467779  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:02:14.467827  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:02:14.470599  124886 out.go:252]   - Generating certificates and keys ...
	I1008 15:02:14.470674  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:02:14.470757  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:02:14.470867  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:02:14.470954  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:02:14.471017  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:02:14.471091  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:02:14.471148  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:02:14.471198  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:02:14.471289  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:02:14.471353  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:02:14.471382  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:02:14.471424  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:02:14.471487  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:02:14.471529  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:02:14.471569  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:02:14.471615  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:02:14.471657  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:02:14.471734  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:02:14.471802  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:02:14.473075  124886 out.go:252]   - Booting up control plane ...
	I1008 15:02:14.473133  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:02:14.473209  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:02:14.473257  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:02:14.473356  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:02:14.473436  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:02:14.473538  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:02:14.473606  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:02:14.473637  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:02:14.473747  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:02:14.473833  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:02:14.473877  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.93866ms
	I1008 15:02:14.473950  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:02:14.474013  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 15:02:14.474094  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:02:14.474159  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:02:14.474228  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	I1008 15:02:14.474292  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	I1008 15:02:14.474371  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	I1008 15:02:14.474380  124886 kubeadm.go:318] 
	I1008 15:02:14.474476  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:02:14.474542  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:02:14.474617  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:02:14.474713  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:02:14.474773  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:02:14.474854  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:02:14.474900  124886 kubeadm.go:318] 
	I1008 15:02:14.474937  124886 kubeadm.go:402] duration metric: took 12m8.444330692s to StartCluster
	I1008 15:02:14.474986  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:02:14.475048  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:02:14.503050  124886 cri.go:89] found id: ""
	I1008 15:02:14.503067  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.503076  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:02:14.503082  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:02:14.503136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:02:14.530120  124886 cri.go:89] found id: ""
	I1008 15:02:14.530138  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.530145  124886 logs.go:284] No container was found matching "etcd"
	I1008 15:02:14.530149  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:02:14.530200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:02:14.555892  124886 cri.go:89] found id: ""
	I1008 15:02:14.555909  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.555916  124886 logs.go:284] No container was found matching "coredns"
	I1008 15:02:14.555921  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:02:14.555972  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:02:14.583336  124886 cri.go:89] found id: ""
	I1008 15:02:14.583351  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.583358  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:02:14.583363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:02:14.583409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:02:14.611139  124886 cri.go:89] found id: ""
	I1008 15:02:14.611160  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.611169  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:02:14.611175  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:02:14.611227  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:02:14.639405  124886 cri.go:89] found id: ""
	I1008 15:02:14.639422  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.639429  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:02:14.639434  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:02:14.639496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:02:14.666049  124886 cri.go:89] found id: ""
	I1008 15:02:14.666066  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.666073  124886 logs.go:284] No container was found matching "kindnet"
	I1008 15:02:14.666082  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:02:14.666093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:02:14.729847  124886 logs.go:123] Gathering logs for container status ...
	I1008 15:02:14.729877  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 15:02:14.760743  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 15:02:14.760761  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:02:14.827532  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 15:02:14.827555  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:02:14.842256  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:02:14.842273  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:02:14.900360  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1008 15:02:14.900380  124886 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:02:14.900418  124886 out.go:285] * 
	W1008 15:02:14.900560  124886 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.900582  124886 out.go:285] * 
	W1008 15:02:14.902936  124886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:02:14.906609  124886 out.go:203] 
	W1008 15:02:14.908139  124886 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.908172  124886 out.go:285] * 
	I1008 15:02:14.910356  124886 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.236147607Z" level=info msg="createCtr: removing container 8f90e981d591b1813723dfa77b79e967f03eead8d5e3a0d2b53230766b677389" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.236182628Z" level=info msg="createCtr: deleting container 8f90e981d591b1813723dfa77b79e967f03eead8d5e3a0d2b53230766b677389 from storage" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:11 functional-367186 crio[5841]: time="2025-10-08T15:02:11.238322647Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c9f63674abedb97e40dbf72720752d59_0" id=442b87ca-4162-43c4-a6a7-06ee1e1feaf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.21213297Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e58a632c-ac54-43a6-a140-845f4ef163fe name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.214269396Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5ad01698-37e9-4323-80f8-3474caec0a68 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.215179195Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-367186/kube-scheduler" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.215432207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.218783034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.219253823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.234458319Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.235918816Z" level=info msg="createCtr: deleting container ID 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2 from idIndex" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.235970167Z" level=info msg="createCtr: removing container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.236014435Z" level=info msg="createCtr: deleting container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2 from storage" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.238146031Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.213078537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=54863201-7b39-4ed4-ab14-0d41c1a7c865 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.21401263Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=aea1d193-b8b9-4b9f-b6bb-340acce60e77 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.214965671Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.215222603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218562955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218978786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.240788352Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242470926Z" level=info msg="createCtr: deleting container ID 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from idIndex" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242521147Z" level=info msg="createCtr: removing container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242570796Z" level=info msg="createCtr: deleting container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from storage" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.244732312Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:18.002616   15890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:18.003102   15890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:18.004655   15890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:18.005083   15890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:18.006702   15890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:18 up  2:44,  0 user,  load average: 0.16, 0.06, 0.23
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:11 functional-367186 kubelet[14967]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c9f63674abedb97e40dbf72720752d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:11 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:11 functional-367186 kubelet[14967]: E1008 15:02:11.238833   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c9f63674abedb97e40dbf72720752d59"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.211693   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238496   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:12 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:12 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238592   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:12 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:12 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238621   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.212513   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245058   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > podSandboxID="49d755d590c1e6c75fffb26df4018ef3af1ece9b6aef63dbe754f59f467146f3"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245169   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245209   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:14 functional-367186 kubelet[14967]: E1008 15:02:14.233845   14967 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 15:02:16 functional-367186 kubelet[14967]: E1008 15:02:16.045402   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 15:02:17 functional-367186 kubelet[14967]: E1008 15:02:17.036703   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 08 15:02:17 functional-367186 kubelet[14967]: E1008 15:02:17.835695   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: I1008 15:02:18.001053   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.001494   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	

                                                
                                                
-- /stdout --
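
The kubeadm output above repeatedly suggests crictl for locating the control-plane containers that never came up. A minimal triage sketch along those lines, run inside the node (for example via `minikube -p functional-367186 ssh`), with the CRI-O socket path taken from the log and CONTAINERID left as a placeholder:

    # List every Kubernetes container, including ones that exited immediately.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then pull the logs of a failing container found above (replace CONTAINERID with a real ID).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
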
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (307.338743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.97s)
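
Both the CRI-O and kubelet logs above fail every control-plane container create with "cannot open sd-bus: No such file or directory", which usually points at CRI-O's systemd cgroup manager being unable to reach systemd's D-Bus socket inside the node. A minimal check along those lines, assuming the node container name from the logs and a standard CRI-O config layout (the report itself does not establish the root cause):

    # Assumed diagnostic sketch; the container name comes from the logs above.
    docker exec functional-367186 ls -l /run/dbus/system_bus_socket
    docker exec functional-367186 systemctl is-system-running
    # CRI-O's cgroup manager setting; "systemd" requires a working sd-bus connection.
    # Paths assume a standard CRI-O layout inside the node.
    docker exec functional-367186 grep -Rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/
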

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-367186 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-367186 apply -f testdata/invalidsvc.yaml: exit status 1 (49.352463ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-367186 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
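
The apply failure above is a downstream symptom of the apiserver on 192.168.49.2:8441 not accepting connections rather than a problem with the manifest itself. A minimal sketch for confirming that directly, using the endpoint and context names from the log:

    # Both checks should fail with "connection refused" while the control plane is down.
    curl -k https://192.168.49.2:8441/livez
    kubectl --context functional-367186 get --raw /livez
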

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-367186 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-367186 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-367186 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-367186 --alsologtostderr -v=1] stderr:
I1008 15:02:31.434072  147145 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:31.434345  147145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:31.434354  147145 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:31.434358  147145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:31.434601  147145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:31.434893  147145 mustload.go:65] Loading cluster: functional-367186
I1008 15:02:31.435371  147145 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:31.435964  147145 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:31.453725  147145 host.go:66] Checking if "functional-367186" exists ...
I1008 15:02:31.454113  147145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 15:02:31.518711  147145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:31.506241031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 15:02:31.518853  147145 api_server.go:166] Checking apiserver status ...
I1008 15:02:31.518909  147145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1008 15:02:31.518952  147145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:31.537870  147145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
W1008 15:02:31.646014  147145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1008 15:02:31.648097  147145 out.go:179] * The control-plane node functional-367186 apiserver is not running: (state=Stopped)
I1008 15:02:31.649482  147145 out.go:179]   To start a cluster, run: "minikube start -p functional-367186"
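For reference, the apiserver check in the lines above can be reproduced by hand; the container name, port template, SSH port and key path are the ones captured in this run, and the snippet is only an illustrative sketch, not part of the test:

# Illustrative reproduction of minikube's apiserver liveness check (values taken from this run's log).
SSH_PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-367186)
ssh -o StrictHostKeyChecking=no -p "$SSH_PORT" \
  -i /home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa \
  docker@127.0.0.1 'sudo pgrep -xnf kube-apiserver.*minikube.*'
# Exit status 1 with empty output matches the "state=Stopped" verdict logged above.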
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
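The port mappings in the inspect output above include the apiserver port (8441/tcp published on 127.0.0.1:32781 in this run); a minimal, illustrative probe of that endpoint from the host looks like the following and is expected to fail here, consistent with the "Stopped" apiserver status reported further below:

# Illustrative only: probe the host-mapped apiserver port from the docker inspect output above.
curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:32781/healthz || true
# A connection failure (curl prints 000) is consistent with "apiserver: Stopped" in the status output.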
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (321.076312ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
I1008 15:02:32.532173   98900 retry.go:31] will retry after 3.963490738s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ tunnel    │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount3 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount1 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount2 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ tunnel    │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh       │ functional-367186 ssh findmnt -T /mount2                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh       │ functional-367186 ssh findmnt -T /mount3                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ mount     │ -p functional-367186 --kill=true                                                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh sudo cat /etc/test/nested/copy/98900/hosts                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image rm kicbase/echo-server:functional-367186 --alsologtostderr                                                                              │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image save --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ start     │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start     │ -p functional-367186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start     │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ addons    │ functional-367186 addons list                                                                                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ addons    │ functional-367186 addons list -o json                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ dashboard │ --url --port 36195 -p functional-367186 --alsologtostderr -v=1                                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:02:31
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:02:31.228491  146984 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:31.228757  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.228769  146984 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:31.228775  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.229092  146984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:31.229608  146984 out.go:368] Setting JSON to false
	I1008 15:02:31.230544  146984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9902,"bootTime":1759925849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:02:31.230642  146984 start.go:141] virtualization: kvm guest
	I1008 15:02:31.232608  146984 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:02:31.234774  146984 notify.go:220] Checking for updates...
	I1008 15:02:31.234788  146984 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:02:31.236372  146984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:02:31.237980  146984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:02:31.239532  146984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:02:31.240888  146984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:02:31.242413  146984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:02:31.244247  146984 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:31.244801  146984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:02:31.271217  146984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:02:31.271332  146984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:02:31.337074  146984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:31.325606098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:02:31.337200  146984 docker.go:318] overlay module found
	I1008 15:02:31.339135  146984 out.go:179] * Using the docker driver based on existing profile
	I1008 15:02:31.340433  146984 start.go:305] selected driver: docker
	I1008 15:02:31.340459  146984 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:02:31.340589  146984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:02:31.342564  146984 out.go:203] 
	W1008 15:02:31.343899  146984 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 15:02:31.345192  146984 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.025485551Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=a4f09100-a89a-48dc-89f1-535c556a80a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052496663Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052654742Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.05273608Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078814601Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078975874Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.079026616Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.876199233Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=bcb6792f-0817-4dec-aab1-936038b6e1e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905821555Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905973538Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.906015096Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934168176Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934313118Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934355764Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.212253616Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=780ea47b-00a3-4ad3-b471-044379f619e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.213350577Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=01f8a6fe-79dc-475e-8a32-44eb2d1fe360 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.21442001Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.214709008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.219408546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.219977147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.239166175Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.240977302Z" level=info msg="createCtr: deleting container ID a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98 from idIndex" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.241018868Z" level=info msg="createCtr: removing container a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.241056209Z" level=info msg="createCtr: deleting container a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98 from storage" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.243658308Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:32.697634   18042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.698260   18042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.700283   18042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.701422   18042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.701932   18042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:32 up  2:45,  0 user,  load average: 1.32, 0.34, 0.32
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252072   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.046948   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.212548   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244164   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244294   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244335   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.115019   14967 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.438217   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 08 15:02:31 functional-367186 kubelet[14967]: E1008 15:02:31.838233   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: I1008 15:02:32.009938   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.010822   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.211773   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244004   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:32 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:32 functional-367186 kubelet[14967]:  > podSandboxID="6ab3169b39f563ff749bb50d5d8d7a3bb62a9ced39a9d97f82c3acd85f61e1c9"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244152   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:32 functional-367186 kubelet[14967]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:32 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244200   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	

                                                
                                                
-- /stdout --
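The CRI-O and kubelet sections above fail every container creation with "cannot open sd-bus: No such file or directory". A minimal sketch, assuming the node container should be running systemd and exposing its bus sockets, of how that could be checked from the host (illustrative only, not part of the test):

# Illustrative only: check whether systemd and its bus sockets are present inside the node container.
docker exec functional-367186 ls -l /run/systemd/private /run/dbus/system_bus_socket || true
docker exec functional-367186 systemctl is-system-running || true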
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (333.818493ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 status: exit status 2 (369.324081ms)

                                                
                                                
-- stdout --
	functional-367186
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-367186 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (383.754117ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-367186 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 status -o json: exit status 2 (358.269384ms)

                                                
                                                
-- stdout --
	{"Name":"functional-367186","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-367186 status -o json" : exit status 2
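The JSON emitted by "status -o json" above is a single object per node; an illustrative way to pull individual fields out of it (assuming jq is available on the host):

# Illustrative only: extract the apiserver state from the JSON status shown above.
out/minikube-linux-amd64 -p functional-367186 status -o json | jq -r '.APIServer'
# Prints "Stopped" for this run, matching the plain-text and custom-format status outputs.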
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
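The inspect output above is the state the helpers have to work with: the node's SSH endpoint (container port 22/tcp) is published on 127.0.0.1:32778 and the API server port 8441/tcp on 127.0.0.1:32781. A minimal way to pull a single value such as the SSH host port back out of that state, using the same Go template the provisioner logs further below, would be (a sketch, assuming the functional-367186 container is still present):

	docker container inspect functional-367186 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 32778 for the container state captured above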
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (345.807784ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-367186 logs -n 25: (1.208922059s)
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ cache   │ functional-367186 cache reload                                                                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ ssh     │ functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │ 08 Oct 25 14:49 UTC │
	│ kubectl │ functional-367186 kubectl -- --context functional-367186 get pods                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:49 UTC │                     │
	│ start   │ -p functional-367186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │                     │
	│ cp      │ functional-367186 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config unset cpus                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ service │ functional-367186 service list                                                                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ config  │ functional-367186 config get cpus                                                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ config  │ functional-367186 config set cpus 2                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config get cpus                                                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config unset cpus                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /home/docker/cp-test.txt                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh cat /etc/hostname                                                                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config get cpus                                                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ service │ functional-367186 service list -o json                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ cp      │ functional-367186 cp functional-367186:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2365031035/001/cp-test.txt │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ service │ functional-367186 service --namespace=default --https --url hello-node                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /home/docker/cp-test.txt                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ service │ functional-367186 service hello-node --url --format={{.IP}}                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ service │ functional-367186 service hello-node --url                                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ cp      │ functional-367186 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:50:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:50:02.487614  124886 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:02.487885  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.487890  124886 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:02.487894  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.488148  124886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:50:02.488703  124886 out.go:368] Setting JSON to false
	I1008 14:50:02.489732  124886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9153,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:50:02.489824  124886 start.go:141] virtualization: kvm guest
	I1008 14:50:02.491855  124886 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:50:02.493271  124886 notify.go:220] Checking for updates...
	I1008 14:50:02.493279  124886 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:50:02.494598  124886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:50:02.495836  124886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:50:02.497242  124886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:50:02.498624  124886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:50:02.499973  124886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:50:02.501897  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:02.502018  124886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:50:02.525193  124886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:50:02.525315  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.584022  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.573926988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.584110  124886 docker.go:318] overlay module found
	I1008 14:50:02.585968  124886 out.go:179] * Using the docker driver based on existing profile
	I1008 14:50:02.587279  124886 start.go:305] selected driver: docker
	I1008 14:50:02.587288  124886 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.587409  124886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:50:02.587529  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.641632  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.631975419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.642294  124886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:50:02.642317  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:02.642374  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:02.642409  124886 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.644427  124886 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:50:02.645877  124886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:50:02.647092  124886 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:50:02.648224  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:02.648254  124886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:50:02.648262  124886 cache.go:58] Caching tarball of preloaded images
	I1008 14:50:02.648344  124886 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:50:02.648340  124886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:50:02.648350  124886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:50:02.648438  124886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:50:02.667989  124886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:50:02.668000  124886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:50:02.668014  124886 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:50:02.668041  124886 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:50:02.668096  124886 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "functional-367186"
	I1008 14:50:02.668109  124886 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:50:02.668113  124886 fix.go:54] fixHost starting: 
	I1008 14:50:02.668337  124886 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:50:02.684543  124886 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:50:02.684562  124886 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:50:02.686414  124886 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:50:02.686441  124886 machine.go:93] provisionDockerMachine start ...
	I1008 14:50:02.686552  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.704251  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.704482  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.704488  124886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:50:02.850612  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:02.850631  124886 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:50:02.850683  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.868208  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.868417  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.868424  124886 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:50:03.024186  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:03.024255  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.041071  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.041277  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.041288  124886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:50:03.186253  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:50:03.186270  124886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:50:03.186287  124886 ubuntu.go:190] setting up certificates
	I1008 14:50:03.186296  124886 provision.go:84] configureAuth start
	I1008 14:50:03.186366  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:03.203498  124886 provision.go:143] copyHostCerts
	I1008 14:50:03.203554  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:50:03.203567  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:50:03.203633  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:50:03.203728  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:50:03.203738  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:50:03.203764  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:50:03.203811  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:50:03.203815  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:50:03.203835  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:50:03.203891  124886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:50:03.342698  124886 provision.go:177] copyRemoteCerts
	I1008 14:50:03.342747  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:50:03.342789  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.359931  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.462754  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:50:03.480100  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:50:03.497218  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:50:03.514367  124886 provision.go:87] duration metric: took 328.059175ms to configureAuth
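The SAN list requested in the "generating server cert" line above (127.0.0.1, 192.168.49.2, functional-367186, localhost, minikube) can be read back from the generated certificate with a standard openssl query (a sketch; the path is the machines/server.pem named in that provision line):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'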
	I1008 14:50:03.514387  124886 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:50:03.514597  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:03.514714  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.531920  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.532136  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.532149  124886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:50:03.804333  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:50:03.804348  124886 machine.go:96] duration metric: took 1.117888769s to provisionDockerMachine
	I1008 14:50:03.804358  124886 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:50:03.804366  124886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:50:03.804425  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:50:03.804490  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.822222  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.925021  124886 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:50:03.928570  124886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:50:03.928586  124886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:50:03.928595  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:50:03.928648  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:50:03.928714  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:50:03.928776  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:50:03.928851  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:50:03.936383  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:03.953682  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:50:03.970665  124886 start.go:296] duration metric: took 166.291312ms for postStartSetup
	I1008 14:50:03.970729  124886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:03.970760  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.987625  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.086669  124886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:50:04.091298  124886 fix.go:56] duration metric: took 1.423178254s for fixHost
	I1008 14:50:04.091311  124886 start.go:83] releasing machines lock for "functional-367186", held for 1.423209484s
	I1008 14:50:04.091360  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:04.107787  124886 ssh_runner.go:195] Run: cat /version.json
	I1008 14:50:04.107823  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.107871  124886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:50:04.107944  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.125505  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.126027  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.277012  124886 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:04.283607  124886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:50:04.317281  124886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:50:04.322127  124886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:50:04.322186  124886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:50:04.329933  124886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:50:04.329948  124886 start.go:495] detecting cgroup driver to use...
	I1008 14:50:04.329985  124886 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:50:04.330037  124886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:50:04.344088  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:50:04.355897  124886 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:50:04.355934  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:50:04.370666  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:50:04.383061  124886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:50:04.469185  124886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:50:04.555865  124886 docker.go:234] disabling docker service ...
	I1008 14:50:04.555933  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:50:04.571649  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:50:04.585004  124886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:50:04.673830  124886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:50:04.762936  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:50:04.775689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:50:04.790127  124886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:50:04.790172  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.799414  124886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:50:04.799484  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.808366  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.816703  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.825175  124886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:50:04.833160  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.842121  124886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.850355  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.859028  124886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:50:04.866049  124886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:50:04.873109  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:04.955543  124886 ssh_runner.go:195] Run: sudo systemctl restart crio
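The sed edits just above rewrite minikube's /etc/crio/crio.conf.d/02-crio.conf drop-in inside the node before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "systemd", conmon_cgroup is reset to "pod", and "net.ipv4.ip_unprivileged_port_start=0" is injected into default_sysctls. A quick way to confirm the rewritten values after the restart (a sketch using the same binary the test drives) would be:

	out/minikube-linux-amd64 -p functional-367186 ssh -- \
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf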
	I1008 14:50:05.069798  124886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:50:05.069856  124886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:50:05.074109  124886 start.go:563] Will wait 60s for crictl version
	I1008 14:50:05.074171  124886 ssh_runner.go:195] Run: which crictl
	I1008 14:50:05.077741  124886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:50:05.103519  124886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:50:05.103581  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.131061  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.160549  124886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:50:05.161770  124886 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:50:05.178428  124886 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:50:05.184282  124886 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 14:50:05.185372  124886 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:50:05.185532  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:05.185581  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.219145  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.219157  124886 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:50:05.219203  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.244747  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.244760  124886 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:50:05.244766  124886 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:50:05.244868  124886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:50:05.244932  124886 ssh_runner.go:195] Run: crio config
	I1008 14:50:05.290552  124886 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 14:50:05.290627  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:05.290634  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:05.290643  124886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:50:05.290661  124886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:50:05.290774  124886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
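The apiServer.extraArgs entry rendered above (enable-admission-plugins: "NamespaceAutoProvision") is the kubeadm form of the --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision flag recorded in the Audit table; as the extraconfig.go line at 14:50:05.290552 notes, it replaces the default plugin list rather than appending to it. Once the cluster is healthy, a quick check that the flag reached the API server (a sketch, assuming the standard kube-apiserver-<node-name> static pod name) would be:

	kubectl --context functional-367186 -n kube-system get pod kube-apiserver-functional-367186 \
	  -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins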
	
	I1008 14:50:05.290829  124886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:50:05.299112  124886 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:50:05.299181  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:50:05.307519  124886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:50:05.319796  124886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:50:05.331988  124886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1008 14:50:05.344225  124886 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:50:05.347910  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:05.434760  124886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:50:05.447481  124886 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:50:05.447496  124886 certs.go:195] generating shared ca certs ...
	I1008 14:50:05.447517  124886 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:50:05.447665  124886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:50:05.447699  124886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:50:05.447705  124886 certs.go:257] generating profile certs ...
	I1008 14:50:05.447783  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:50:05.447822  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:50:05.447852  124886 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:50:05.447956  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:50:05.447979  124886 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:50:05.447984  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:50:05.448004  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:50:05.448022  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:50:05.448039  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:50:05.448072  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:05.448723  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:50:05.466280  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:50:05.482753  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:50:05.499451  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:50:05.516010  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:50:05.532903  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:50:05.549460  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:50:05.566552  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:50:05.584248  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:50:05.601250  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:50:05.618600  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:50:05.636280  124886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:50:05.648959  124886 ssh_runner.go:195] Run: openssl version
	I1008 14:50:05.655372  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:50:05.664552  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668508  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668554  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.702319  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:50:05.710597  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:50:05.719238  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722899  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722944  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.756814  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:50:05.765232  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:50:05.773915  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777582  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777627  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.811974  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
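The block above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the node locate trust anchors. A minimal local sketch of the same idea (the installCA helper is hypothetical and runs the commands directly rather than through minikube's SSH runner):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA copies a CA cert into the shared directory and links it under its
// OpenSSL subject hash, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
func installCA(src, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	if out, err := exec.Command("sudo", "cp", src, dst).CombinedOutput(); err != nil {
		return fmt.Errorf("copy %s: %v: %s", src, err, out)
	}
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %v", dst, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if out, err := exec.Command("sudo", "ln", "-fs", dst, link).CombinedOutput(); err != nil {
		return fmt.Errorf("link %s: %v: %s", link, err, out)
	}
	return nil
}

func main() {
	if err := installCA("minikube-ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```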
	I1008 14:50:05.820369  124886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:50:05.824309  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:50:05.858210  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:50:05.892122  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:50:05.926997  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:50:05.961508  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:50:05.996031  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
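openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 86400 seconds (24 hours), so the series of checks above is effectively asking whether any control-plane certificate will expire within a day. A rough equivalent in Go using crypto/x509 instead of shelling out (paths copied from the log; a sketch, not minikube's actual code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		fmt.Printf("%s: expires within 24h=%v err=%v\n", c, soon, err)
	}
}
```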
	I1008 14:50:06.030615  124886 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:06.030703  124886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:50:06.030782  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.059591  124886 cri.go:89] found id: ""
	I1008 14:50:06.059641  124886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:50:06.068127  124886 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:50:06.068151  124886 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:50:06.068205  124886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:50:06.076226  124886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.076725  124886 kubeconfig.go:125] found "functional-367186" server: "https://192.168.49.2:8441"
	I1008 14:50:06.077896  124886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:50:06.086029  124886 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 14:35:34.873718023 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 14:50:05.341579042 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
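The drift check is simply diff -u between the kubeadm config used at the last start and the freshly rendered one; here the apiserver enable-admission-plugins value changed (the test passes NamespaceAutoProvision as an extra option), so minikube chooses to reconfigure the control plane instead of reusing it. A minimal sketch of that decision, assuming local files and treating any difference as drift:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs "diff -u old new" and treats exit status 1 (files differ)
// as drift; status 0 means no change and anything else is an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected, reconfiguring:\n" + diff)
	}
}
```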
	I1008 14:50:06.086044  124886 kubeadm.go:1160] stopping kube-system containers ...
	I1008 14:50:06.086056  124886 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 14:50:06.086094  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.113178  124886 cri.go:89] found id: ""
	I1008 14:50:06.113245  124886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
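Before reconfiguring, minikube stops whatever kube-system containers CRI-O reports (none are found here) and then stops the kubelet so static pods are not restarted underneath the new config. Roughly, using the same crictl and systemctl commands as the log (a local sketch with error handling omitted):

```go
package main

import (
	"os/exec"
	"strings"
)

func main() {
	// List kube-system container IDs the same way the log does.
	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		exec.Command("sudo", append([]string{"crictl", "stop"}, ids...)...).Run()
	}
	// Stop the kubelet so it does not immediately restart the static pods.
	exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}
```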
	I1008 14:50:06.155234  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:50:06.163592  124886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  8 14:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  8 14:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Oct  8 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  8 14:39 /etc/kubernetes/scheduler.conf
	
	I1008 14:50:06.163642  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:50:06.171483  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:50:06.179293  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.179397  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:50:06.186779  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.194154  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.194203  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.201651  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:50:06.209487  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.209530  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:50:06.217108  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
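Each of kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the expected endpoint https://control-plane.minikube.internal:8441; when grep exits 1 the file is removed so the kubeconfig phase below regenerates it, and finally the new kubeadm.yaml is copied into place. A hedged sketch of that cleanup loop (local exec, not minikube's SSH runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8441"

func main() {
	files := []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the endpoint is missing; remove the file so
		// "kubeadm init phase kubeconfig" regenerates it against the new config.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not point at %s, removing\n", f, endpoint)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```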
	I1008 14:50:06.224828  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:06.265674  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.277477  124886 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011762147s)
	I1008 14:50:07.277533  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.443820  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.494457  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
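The restart path does not run a full kubeadm init; it replays only the phases needed to rebuild the control plane under the updated config: certs, kubeconfig, kubelet-start, control-plane, and etcd. A compact sketch of that sequence using the same bash/env invocation seen in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Mirror the log: run kubeadm from the pinned binaries directory.
		script := fmt.Sprintf(
			`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}
```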
	I1008 14:50:07.547380  124886 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:50:07.547460  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.047610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.547636  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.047603  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.548254  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.047862  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.548513  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.048225  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.548074  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.048566  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.548179  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.047805  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.548258  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.048373  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.047544  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.548496  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.048492  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.548115  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.548277  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.047671  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.048049  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.547809  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.047855  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.547915  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.048015  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.547746  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.048353  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.548289  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.048071  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.547643  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.047912  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.548519  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.047801  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.547748  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.048322  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.548153  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.047657  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.547721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.047652  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.047871  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.548380  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.047959  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.548581  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.047957  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.547650  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.048117  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.547561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.048296  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.547881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.047870  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.548272  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.548487  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.047562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.547999  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.048398  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.547939  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.048434  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.547918  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.048433  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.548054  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.048329  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.548100  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.047697  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.548386  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.047561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.548546  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.048286  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.547793  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.048077  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.547717  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.048220  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.548251  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.047634  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.548172  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.048591  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.548428  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.048515  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.547901  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.048572  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.548237  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.047859  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.548570  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.047742  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.548274  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.047802  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.548510  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.047998  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.547560  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.047723  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.547955  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.048562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.547549  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.047984  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.547945  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.048426  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.547582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.048058  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.548196  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.048582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.548046  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.047563  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.047699  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.547610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.048374  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.548211  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.048533  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
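From 14:50:07 onward the runner polls pgrep -xnf kube-apiserver.*minikube.* roughly every 500 ms; in this run the process never appears, and after about a minute of polling minikube starts interleaving diagnostic passes with slower retries (visible below). A minimal sketch of such a wait loop, assuming a one-minute budget (the exact timeout minikube uses is not shown in this log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or the
// deadline passes, roughly matching the 500 ms cadence visible in the log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return nil // pgrep exits 0 when a match is found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
```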
	I1008 14:51:07.548306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:07.548386  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:07.574942  124886 cri.go:89] found id: ""
	I1008 14:51:07.574974  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.574982  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:07.574988  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:07.575052  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:07.600942  124886 cri.go:89] found id: ""
	I1008 14:51:07.600957  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.600964  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:07.600968  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:07.601020  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:07.627307  124886 cri.go:89] found id: ""
	I1008 14:51:07.627324  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.627331  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:07.627336  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:07.627388  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:07.653908  124886 cri.go:89] found id: ""
	I1008 14:51:07.653925  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.653933  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:07.653938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:07.653988  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:07.681787  124886 cri.go:89] found id: ""
	I1008 14:51:07.681806  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.681814  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:07.681818  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:07.681881  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:07.707870  124886 cri.go:89] found id: ""
	I1008 14:51:07.707886  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.707892  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:07.707898  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:07.707955  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:07.734640  124886 cri.go:89] found id: ""
	I1008 14:51:07.734655  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.734662  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:07.734673  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:07.734682  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:07.804699  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:07.804721  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:07.819273  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:07.819290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:07.875686  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:07.875696  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:07.875709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:07.940091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:07.940122  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
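With no control-plane containers found, each retry ends with the same diagnostics pass: the kubelet and CRI-O journals, dmesg, kubectl describe nodes (which fails here with connection refused because nothing is listening on port 8441), and a crictl ps -a container listing. A sketch of that collection step using the commands exactly as they appear above:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][2]string{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
		// Failures (like the connection-refused describe above) are recorded, not fatal.
		fmt.Printf("== %s (err=%v) ==\n%s\n", s[0], err, out)
	}
}
```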
	I1008 14:51:10.470645  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:10.481694  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:10.481739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:10.506817  124886 cri.go:89] found id: ""
	I1008 14:51:10.506832  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.506839  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:10.506843  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:10.506898  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:10.531484  124886 cri.go:89] found id: ""
	I1008 14:51:10.531499  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.531506  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:10.531511  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:10.531558  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:10.557249  124886 cri.go:89] found id: ""
	I1008 14:51:10.557268  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.557277  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:10.557282  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:10.557333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:10.582779  124886 cri.go:89] found id: ""
	I1008 14:51:10.582797  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.582833  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:10.582838  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:10.582908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:10.608584  124886 cri.go:89] found id: ""
	I1008 14:51:10.608599  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.608606  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:10.608610  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:10.608653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:10.634540  124886 cri.go:89] found id: ""
	I1008 14:51:10.634557  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.634567  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:10.634573  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:10.634635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:10.659510  124886 cri.go:89] found id: ""
	I1008 14:51:10.659526  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.659532  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:10.659541  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:10.659552  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:10.727322  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:10.727344  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:10.741862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:10.741882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:10.798339  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:10.798350  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:10.798362  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:10.862340  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:10.862363  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.392975  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:13.404098  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:13.404165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:13.430215  124886 cri.go:89] found id: ""
	I1008 14:51:13.430231  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.430237  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:13.430242  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:13.430283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:13.455821  124886 cri.go:89] found id: ""
	I1008 14:51:13.455837  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.455844  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:13.455853  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:13.455903  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:13.482279  124886 cri.go:89] found id: ""
	I1008 14:51:13.482296  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.482316  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:13.482321  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:13.482366  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:13.508868  124886 cri.go:89] found id: ""
	I1008 14:51:13.508883  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.508893  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:13.508900  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:13.508957  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:13.534938  124886 cri.go:89] found id: ""
	I1008 14:51:13.534954  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.534960  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:13.534964  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:13.535012  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:13.562594  124886 cri.go:89] found id: ""
	I1008 14:51:13.562611  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.562620  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:13.562626  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:13.562683  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:13.588476  124886 cri.go:89] found id: ""
	I1008 14:51:13.588493  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.588505  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:13.588513  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:13.588522  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.617969  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:13.617996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:13.687989  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:13.688010  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:13.702556  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:13.702577  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:13.758238  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:13.758274  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:13.758288  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.324420  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:16.335355  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:16.335413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:16.361211  124886 cri.go:89] found id: ""
	I1008 14:51:16.361227  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.361233  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:16.361238  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:16.361283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:16.388154  124886 cri.go:89] found id: ""
	I1008 14:51:16.388170  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.388176  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:16.388180  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:16.388234  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:16.414515  124886 cri.go:89] found id: ""
	I1008 14:51:16.414532  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.414539  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:16.414545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:16.414606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:16.441112  124886 cri.go:89] found id: ""
	I1008 14:51:16.441130  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.441137  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:16.441143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:16.441196  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:16.467403  124886 cri.go:89] found id: ""
	I1008 14:51:16.467423  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.467434  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:16.467439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:16.467515  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:16.493912  124886 cri.go:89] found id: ""
	I1008 14:51:16.493994  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.494017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:16.494025  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:16.494086  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:16.520736  124886 cri.go:89] found id: ""
	I1008 14:51:16.520754  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.520761  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:16.520770  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:16.520784  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:16.578205  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:16.578222  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:16.578237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.641639  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:16.641661  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:16.671073  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:16.671090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:16.740879  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:16.740901  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.256721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:19.267621  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:19.267671  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:19.293587  124886 cri.go:89] found id: ""
	I1008 14:51:19.293605  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.293611  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:19.293616  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:19.293661  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:19.318866  124886 cri.go:89] found id: ""
	I1008 14:51:19.318886  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.318898  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:19.318905  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:19.318973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:19.344646  124886 cri.go:89] found id: ""
	I1008 14:51:19.344660  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.344668  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:19.344673  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:19.344730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:19.370979  124886 cri.go:89] found id: ""
	I1008 14:51:19.370994  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.371001  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:19.371006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:19.371049  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:19.398115  124886 cri.go:89] found id: ""
	I1008 14:51:19.398134  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.398144  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:19.398149  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:19.398205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:19.425579  124886 cri.go:89] found id: ""
	I1008 14:51:19.425594  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.425602  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:19.425606  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:19.425664  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:19.451179  124886 cri.go:89] found id: ""
	I1008 14:51:19.451194  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.451201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:19.451209  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:19.451219  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:19.515409  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:19.515430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.530193  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:19.530208  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:19.587513  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:19.587527  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:19.587538  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:19.650244  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:19.650266  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
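(Editor's note on the loop recorded above, which repeats below with fresh timestamps and PIDs: minikube looks for a running kube-apiserver process, asks CRI-O via crictl for each expected control-plane container, finds none, and then gathers kubelet/dmesg/CRI-O/container-status logs and attempts "kubectl describe nodes", which fails because nothing answers on localhost:8441. The sketch below is a minimal, self-contained illustration of that same probe, built only from the commands visible in the log; it is not minikube's implementation, and it assumes it is run on the node itself where sudo, crictl and journalctl are available.)

	// probe.go - editor's sketch of the diagnostic loop recorded in this log.
	// It reruns the commands the log shows (crictl ps, journalctl) and reports
	// which control-plane containers are missing. NOT minikube's own code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Same query the log records: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
		// When nothing is found, the log falls back to gathering journals, e.g.:
		//   sudo journalctl -u kubelet -n 400
		//   sudo journalctl -u crio -n 400
		journal, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		fmt.Printf("kubelet journal: %d bytes\n", len(journal))
	}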
	I1008 14:51:22.181221  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:22.192437  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:22.192530  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:22.218691  124886 cri.go:89] found id: ""
	I1008 14:51:22.218709  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.218717  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:22.218722  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:22.218784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:22.245011  124886 cri.go:89] found id: ""
	I1008 14:51:22.245028  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.245035  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:22.245040  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:22.245087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:22.271669  124886 cri.go:89] found id: ""
	I1008 14:51:22.271698  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.271706  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:22.271710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:22.271775  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:22.298500  124886 cri.go:89] found id: ""
	I1008 14:51:22.298520  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.298529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:22.298537  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:22.298598  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:22.324858  124886 cri.go:89] found id: ""
	I1008 14:51:22.324873  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.324879  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:22.324883  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:22.324930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:22.351540  124886 cri.go:89] found id: ""
	I1008 14:51:22.351556  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.351563  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:22.351568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:22.351613  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:22.377421  124886 cri.go:89] found id: ""
	I1008 14:51:22.377458  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.377470  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:22.377482  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:22.377497  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:22.450410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:22.450465  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:22.465230  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:22.465257  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:22.521387  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:22.521398  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:22.521409  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:22.586462  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:22.586490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.117667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:25.129264  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:25.129309  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:25.155977  124886 cri.go:89] found id: ""
	I1008 14:51:25.155998  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.156007  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:25.156016  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:25.156090  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:25.183268  124886 cri.go:89] found id: ""
	I1008 14:51:25.183288  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.183297  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:25.183302  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:25.183355  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:25.209728  124886 cri.go:89] found id: ""
	I1008 14:51:25.209745  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.209752  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:25.209763  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:25.209807  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:25.236946  124886 cri.go:89] found id: ""
	I1008 14:51:25.236961  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.236968  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:25.236974  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:25.237017  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:25.263116  124886 cri.go:89] found id: ""
	I1008 14:51:25.263132  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.263138  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:25.263143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:25.263189  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:25.288378  124886 cri.go:89] found id: ""
	I1008 14:51:25.288395  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.288401  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:25.288406  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:25.288460  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:25.315195  124886 cri.go:89] found id: ""
	I1008 14:51:25.315210  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.315217  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:25.315225  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:25.315237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:25.371376  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:25.371387  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:25.371396  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:25.435272  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:25.435294  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.465980  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:25.465996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:25.535450  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:25.535477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.050276  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:28.061620  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:28.061668  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:28.088245  124886 cri.go:89] found id: ""
	I1008 14:51:28.088265  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.088274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:28.088278  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:28.088326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:28.113839  124886 cri.go:89] found id: ""
	I1008 14:51:28.113859  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.113870  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:28.113876  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:28.113940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:28.141395  124886 cri.go:89] found id: ""
	I1008 14:51:28.141414  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.141423  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:28.141429  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:28.141503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:28.168333  124886 cri.go:89] found id: ""
	I1008 14:51:28.168348  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.168354  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:28.168360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:28.168413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:28.192847  124886 cri.go:89] found id: ""
	I1008 14:51:28.192864  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.192870  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:28.192876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:28.192936  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:28.218780  124886 cri.go:89] found id: ""
	I1008 14:51:28.218795  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.218801  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:28.218806  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:28.218875  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:28.244592  124886 cri.go:89] found id: ""
	I1008 14:51:28.244612  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.244622  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:28.244631  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:28.244643  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:28.315714  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:28.315736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.329938  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:28.329954  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:28.387618  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:28.387629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:28.387641  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:28.453202  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:28.453224  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:30.984664  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:30.995891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:30.995939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:31.022304  124886 cri.go:89] found id: ""
	I1008 14:51:31.022328  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.022338  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:31.022344  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:31.022401  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:31.049041  124886 cri.go:89] found id: ""
	I1008 14:51:31.049060  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.049069  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:31.049075  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:31.049123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:31.076924  124886 cri.go:89] found id: ""
	I1008 14:51:31.076940  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.076949  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:31.076953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:31.077003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:31.102922  124886 cri.go:89] found id: ""
	I1008 14:51:31.102942  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.102950  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:31.102955  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:31.103003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:31.131223  124886 cri.go:89] found id: ""
	I1008 14:51:31.131237  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.131244  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:31.131248  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:31.131294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:31.157335  124886 cri.go:89] found id: ""
	I1008 14:51:31.157350  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.157356  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:31.157361  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:31.157403  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:31.183539  124886 cri.go:89] found id: ""
	I1008 14:51:31.183556  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.183563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:31.183571  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:31.183582  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:31.254970  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:31.254991  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:31.269535  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:31.269556  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:31.325660  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:31.325690  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:31.325702  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:31.390180  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:31.390201  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:33.920121  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:33.931525  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:33.931580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:33.956578  124886 cri.go:89] found id: ""
	I1008 14:51:33.956594  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.956601  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:33.956606  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:33.956652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:33.983065  124886 cri.go:89] found id: ""
	I1008 14:51:33.983083  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.983094  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:33.983100  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:33.983176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:34.009180  124886 cri.go:89] found id: ""
	I1008 14:51:34.009198  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.009206  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:34.009211  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:34.009266  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:34.035120  124886 cri.go:89] found id: ""
	I1008 14:51:34.035138  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.035145  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:34.035151  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:34.035207  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:34.060490  124886 cri.go:89] found id: ""
	I1008 14:51:34.060506  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.060512  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:34.060517  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:34.060565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:34.086320  124886 cri.go:89] found id: ""
	I1008 14:51:34.086338  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.086346  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:34.086351  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:34.086394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:34.111862  124886 cri.go:89] found id: ""
	I1008 14:51:34.111883  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.111893  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:34.111902  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:34.111921  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:34.181743  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:34.181765  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:34.196152  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:34.196171  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:34.252034  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:34.252045  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:34.252056  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:34.316760  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:34.316781  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:36.845595  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:36.856603  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:36.856648  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:36.883175  124886 cri.go:89] found id: ""
	I1008 14:51:36.883194  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.883202  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:36.883209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:36.883267  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:36.910081  124886 cri.go:89] found id: ""
	I1008 14:51:36.910096  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.910103  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:36.910107  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:36.910157  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:36.935036  124886 cri.go:89] found id: ""
	I1008 14:51:36.935051  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.935062  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:36.935068  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:36.935122  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:36.961981  124886 cri.go:89] found id: ""
	I1008 14:51:36.961998  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.962009  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:36.962016  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:36.962126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:36.989270  124886 cri.go:89] found id: ""
	I1008 14:51:36.989290  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.989299  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:36.989306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:36.989363  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:37.016135  124886 cri.go:89] found id: ""
	I1008 14:51:37.016153  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.016161  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:37.016165  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:37.016215  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:37.043172  124886 cri.go:89] found id: ""
	I1008 14:51:37.043191  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.043201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:37.043211  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:37.043227  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:37.100326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:37.100338  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:37.100351  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:37.163756  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:37.163777  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:37.193435  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:37.193471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:37.260908  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:37.260933  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:39.777967  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:39.789007  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:39.789059  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:39.815862  124886 cri.go:89] found id: ""
	I1008 14:51:39.815879  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.815886  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:39.815890  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:39.815942  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:39.841950  124886 cri.go:89] found id: ""
	I1008 14:51:39.841966  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.841973  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:39.841979  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:39.842039  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:39.868668  124886 cri.go:89] found id: ""
	I1008 14:51:39.868686  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.868696  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:39.868702  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:39.868755  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:39.895534  124886 cri.go:89] found id: ""
	I1008 14:51:39.895554  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.895564  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:39.895571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:39.895622  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:39.922579  124886 cri.go:89] found id: ""
	I1008 14:51:39.922598  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.922608  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:39.922614  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:39.922660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:39.948340  124886 cri.go:89] found id: ""
	I1008 14:51:39.948356  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.948363  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:39.948367  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:39.948410  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:39.975730  124886 cri.go:89] found id: ""
	I1008 14:51:39.975746  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.975752  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:39.975761  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:39.975771  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:40.004995  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:40.005014  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:40.075523  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:40.075546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:40.090104  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:40.090120  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:40.147226  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:40.147238  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:40.147253  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
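(Editor's note: every "describe nodes" attempt in this stretch fails the same way - the kubectl client cannot reach the apiserver on localhost:8441 ("connect: connection refused"), so the fallback log gathering keeps repeating until the wait loop gives up. The snippet below is a hedged illustration, not part of the test suite, of checking that exact failure mode with a plain TCP dial; the port 8441 comes from the log, everything else is an assumption.)

	// dialcheck.go - editor's sketch: verify whether anything is listening on the
	// apiserver port that the failing kubectl calls in this log are targeting.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The log shows kubectl dialing [::1]:8441 and getting "connection refused".
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}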
	I1008 14:51:42.711983  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:42.723356  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:42.723413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:42.749822  124886 cri.go:89] found id: ""
	I1008 14:51:42.749838  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.749844  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:42.749849  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:42.749917  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:42.776397  124886 cri.go:89] found id: ""
	I1008 14:51:42.776414  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.776421  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:42.776425  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:42.776493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:42.802489  124886 cri.go:89] found id: ""
	I1008 14:51:42.802508  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.802518  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:42.802524  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:42.802572  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:42.829172  124886 cri.go:89] found id: ""
	I1008 14:51:42.829187  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.829193  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:42.829198  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:42.829251  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:42.853534  124886 cri.go:89] found id: ""
	I1008 14:51:42.853552  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.853561  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:42.853568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:42.853635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:42.879567  124886 cri.go:89] found id: ""
	I1008 14:51:42.879583  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.879595  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:42.879601  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:42.879652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:42.904961  124886 cri.go:89] found id: ""
	I1008 14:51:42.904979  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.904986  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:42.904993  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:42.905009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:42.974363  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:42.974384  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:42.989172  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:42.989192  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:43.045247  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:43.045260  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:43.045275  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:43.106406  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:43.106429  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:45.637311  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:45.648040  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:45.648095  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:45.673462  124886 cri.go:89] found id: ""
	I1008 14:51:45.673481  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.673491  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:45.673497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:45.673550  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:45.698163  124886 cri.go:89] found id: ""
	I1008 14:51:45.698181  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.698188  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:45.698193  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:45.698246  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:45.723467  124886 cri.go:89] found id: ""
	I1008 14:51:45.723561  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.723573  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:45.723581  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:45.723641  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:45.748702  124886 cri.go:89] found id: ""
	I1008 14:51:45.748717  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.748726  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:45.748732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:45.748796  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:45.775585  124886 cri.go:89] found id: ""
	I1008 14:51:45.775604  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.775612  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:45.775617  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:45.775670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:45.801010  124886 cri.go:89] found id: ""
	I1008 14:51:45.801025  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.801031  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:45.801036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:45.801084  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:45.827042  124886 cri.go:89] found id: ""
	I1008 14:51:45.827059  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.827067  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:45.827075  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:45.827086  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:45.895458  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:45.895480  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:45.910085  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:45.910109  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:45.966571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:45.966593  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:45.966605  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:46.027581  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:46.027606  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:48.557168  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:48.568079  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:48.568130  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:48.594574  124886 cri.go:89] found id: ""
	I1008 14:51:48.594594  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.594603  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:48.594609  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:48.594653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:48.621962  124886 cri.go:89] found id: ""
	I1008 14:51:48.621977  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.621984  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:48.621989  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:48.622035  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:48.648065  124886 cri.go:89] found id: ""
	I1008 14:51:48.648080  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.648087  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:48.648091  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:48.648146  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:48.675285  124886 cri.go:89] found id: ""
	I1008 14:51:48.675300  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.675307  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:48.675311  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:48.675356  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:48.701191  124886 cri.go:89] found id: ""
	I1008 14:51:48.701210  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.701218  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:48.701225  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:48.701271  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:48.729042  124886 cri.go:89] found id: ""
	I1008 14:51:48.729069  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.729079  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:48.729086  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:48.729136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:48.754548  124886 cri.go:89] found id: ""
	I1008 14:51:48.754564  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.754572  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:48.754580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:48.754590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:48.822673  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:48.822705  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:48.836997  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:48.837017  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:48.894196  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:48.894212  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:48.894223  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:48.955101  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:48.955127  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.487365  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:51.498554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:51.498603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:51.525066  124886 cri.go:89] found id: ""
	I1008 14:51:51.525081  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.525088  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:51.525094  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:51.525147  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:51.550909  124886 cri.go:89] found id: ""
	I1008 14:51:51.550926  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.550933  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:51.550938  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:51.550989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:51.576844  124886 cri.go:89] found id: ""
	I1008 14:51:51.576860  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.576867  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:51.576871  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:51.576919  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:51.603876  124886 cri.go:89] found id: ""
	I1008 14:51:51.603894  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.603900  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:51.603907  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:51.603958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:51.630518  124886 cri.go:89] found id: ""
	I1008 14:51:51.630533  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.630540  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:51.630545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:51.630591  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:51.656592  124886 cri.go:89] found id: ""
	I1008 14:51:51.656625  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.656634  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:51.656641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:51.656686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:51.682732  124886 cri.go:89] found id: ""
	I1008 14:51:51.682750  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.682757  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:51.682766  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:51.682775  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:51.742589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:51.742612  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.771353  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:51.771369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:51.842948  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:51.842971  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:51.857862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:51.857882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:51.915551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.417267  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:54.428273  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:54.428333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:54.454016  124886 cri.go:89] found id: ""
	I1008 14:51:54.454030  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.454037  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:54.454042  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:54.454097  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:54.479088  124886 cri.go:89] found id: ""
	I1008 14:51:54.479104  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.479112  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:54.479117  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:54.479171  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:54.504383  124886 cri.go:89] found id: ""
	I1008 14:51:54.504401  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.504411  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:54.504418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:54.504481  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:54.530502  124886 cri.go:89] found id: ""
	I1008 14:51:54.530522  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.530529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:54.530534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:54.530578  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:54.556899  124886 cri.go:89] found id: ""
	I1008 14:51:54.556920  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.556929  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:54.556935  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:54.556983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:54.582860  124886 cri.go:89] found id: ""
	I1008 14:51:54.582878  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.582888  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:54.582895  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:54.582954  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:54.609653  124886 cri.go:89] found id: ""
	I1008 14:51:54.609670  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.609679  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:54.609689  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:54.609704  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:54.666095  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.666106  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:54.666116  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:54.725670  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:54.725693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:54.755377  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:54.755394  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:54.824839  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:54.824860  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.340378  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:57.351013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:57.351087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:57.377174  124886 cri.go:89] found id: ""
	I1008 14:51:57.377192  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.377201  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:57.377208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:57.377259  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:57.403239  124886 cri.go:89] found id: ""
	I1008 14:51:57.403254  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.403261  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:57.403271  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:57.403317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:57.429149  124886 cri.go:89] found id: ""
	I1008 14:51:57.429168  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.429179  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:57.429185  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:57.429244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:57.454095  124886 cri.go:89] found id: ""
	I1008 14:51:57.454114  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.454128  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:57.454133  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:57.454187  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:57.479640  124886 cri.go:89] found id: ""
	I1008 14:51:57.479658  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.479665  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:57.479670  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:57.479725  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:57.505776  124886 cri.go:89] found id: ""
	I1008 14:51:57.505795  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.505805  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:57.505811  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:57.505853  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:57.531837  124886 cri.go:89] found id: ""
	I1008 14:51:57.531852  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.531860  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:57.531867  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:57.531878  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:57.599522  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:57.599544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.614111  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:57.614132  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:57.671063  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:57.671074  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:57.671084  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:57.732027  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:57.732050  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:00.263338  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:00.274100  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:00.274167  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:00.299677  124886 cri.go:89] found id: ""
	I1008 14:52:00.299692  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.299698  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:00.299703  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:00.299744  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:00.325037  124886 cri.go:89] found id: ""
	I1008 14:52:00.325055  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.325065  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:00.325071  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:00.325128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:00.351372  124886 cri.go:89] found id: ""
	I1008 14:52:00.351388  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.351397  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:00.351402  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:00.351465  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:00.377746  124886 cri.go:89] found id: ""
	I1008 14:52:00.377761  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.377767  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:00.377772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:00.377838  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:00.403806  124886 cri.go:89] found id: ""
	I1008 14:52:00.403821  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.403827  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:00.403832  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:00.403888  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:00.431653  124886 cri.go:89] found id: ""
	I1008 14:52:00.431673  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.431682  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:00.431687  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:00.431732  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:00.458706  124886 cri.go:89] found id: ""
	I1008 14:52:00.458720  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.458727  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:00.458735  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:00.458744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:00.527333  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:00.527355  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:00.545238  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:00.545260  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:00.604166  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:00.604178  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:00.604190  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:00.667338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:00.667360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.196993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:03.207677  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:03.207730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:03.232932  124886 cri.go:89] found id: ""
	I1008 14:52:03.232952  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.232963  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:03.232969  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:03.233019  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:03.257910  124886 cri.go:89] found id: ""
	I1008 14:52:03.257927  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.257934  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:03.257939  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:03.257989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:03.282476  124886 cri.go:89] found id: ""
	I1008 14:52:03.282491  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.282498  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:03.282503  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:03.282556  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:03.307994  124886 cri.go:89] found id: ""
	I1008 14:52:03.308009  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.308016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:03.308020  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:03.308066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:03.333961  124886 cri.go:89] found id: ""
	I1008 14:52:03.333978  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.333985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:03.333990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:03.334036  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:03.360461  124886 cri.go:89] found id: ""
	I1008 14:52:03.360480  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.360491  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:03.360498  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:03.360546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:03.385935  124886 cri.go:89] found id: ""
	I1008 14:52:03.385951  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.385958  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:03.385965  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:03.385980  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:03.399673  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:03.399689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:03.456423  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:03.456433  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:03.456459  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:03.519728  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:03.519750  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.549347  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:03.549365  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.121403  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:06.132277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:06.132329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:06.158234  124886 cri.go:89] found id: ""
	I1008 14:52:06.158248  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.158255  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:06.158260  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:06.158308  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:06.184118  124886 cri.go:89] found id: ""
	I1008 14:52:06.184136  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.184145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:06.184151  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:06.184201  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:06.210586  124886 cri.go:89] found id: ""
	I1008 14:52:06.210604  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.210613  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:06.210619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:06.210682  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:06.236986  124886 cri.go:89] found id: ""
	I1008 14:52:06.237004  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.237013  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:06.237018  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:06.237064  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:06.264151  124886 cri.go:89] found id: ""
	I1008 14:52:06.264172  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.264182  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:06.264188  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:06.264240  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:06.290106  124886 cri.go:89] found id: ""
	I1008 14:52:06.290120  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.290126  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:06.290132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:06.290177  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:06.316419  124886 cri.go:89] found id: ""
	I1008 14:52:06.316435  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.316453  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:06.316464  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:06.316477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:06.377522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:06.377544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:06.407056  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:06.407075  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.474318  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:06.474342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:06.488482  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:06.488502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:06.546904  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.048569  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:09.059380  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:09.059436  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:09.085888  124886 cri.go:89] found id: ""
	I1008 14:52:09.085906  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.085912  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:09.085918  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:09.085971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:09.113858  124886 cri.go:89] found id: ""
	I1008 14:52:09.113875  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.113882  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:09.113892  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:09.113939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:09.140388  124886 cri.go:89] found id: ""
	I1008 14:52:09.140407  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.140414  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:09.140420  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:09.140493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:09.168003  124886 cri.go:89] found id: ""
	I1008 14:52:09.168018  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.168025  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:09.168030  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:09.168075  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:09.194655  124886 cri.go:89] found id: ""
	I1008 14:52:09.194681  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.194690  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:09.194696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:09.194757  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:09.221388  124886 cri.go:89] found id: ""
	I1008 14:52:09.221405  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.221411  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:09.221416  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:09.221490  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:09.247075  124886 cri.go:89] found id: ""
	I1008 14:52:09.247093  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.247102  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:09.247122  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:09.247133  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:09.304638  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:09.304650  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:09.304664  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:09.368718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:09.368742  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:09.399217  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:09.399239  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:09.468608  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:09.468629  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:11.984769  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:11.995534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:11.995596  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:12.020218  124886 cri.go:89] found id: ""
	I1008 14:52:12.020234  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.020241  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:12.020247  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:12.020289  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:12.045959  124886 cri.go:89] found id: ""
	I1008 14:52:12.045978  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.045989  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:12.045996  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:12.046103  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:12.072101  124886 cri.go:89] found id: ""
	I1008 14:52:12.072118  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.072125  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:12.072129  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:12.072174  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:12.098793  124886 cri.go:89] found id: ""
	I1008 14:52:12.098808  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.098814  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:12.098819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:12.098871  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:12.124876  124886 cri.go:89] found id: ""
	I1008 14:52:12.124891  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.124900  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:12.124906  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:12.124973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:12.151678  124886 cri.go:89] found id: ""
	I1008 14:52:12.151695  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.151703  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:12.151708  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:12.151764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:12.176969  124886 cri.go:89] found id: ""
	I1008 14:52:12.176986  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.176994  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:12.177004  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:12.177019  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:12.247581  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:12.247604  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:12.262272  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:12.262290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:12.319283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:12.319306  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:12.319318  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:12.383384  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:12.383406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:14.914713  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:14.925495  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:14.925548  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:14.951182  124886 cri.go:89] found id: ""
	I1008 14:52:14.951197  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.951205  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:14.951209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:14.951265  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:14.978925  124886 cri.go:89] found id: ""
	I1008 14:52:14.978941  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.978948  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:14.978953  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:14.979004  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:15.003964  124886 cri.go:89] found id: ""
	I1008 14:52:15.003983  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.003992  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:15.003997  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:15.004061  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:15.030077  124886 cri.go:89] found id: ""
	I1008 14:52:15.030095  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.030102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:15.030107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:15.030154  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:15.055689  124886 cri.go:89] found id: ""
	I1008 14:52:15.055704  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.055711  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:15.055715  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:15.055760  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:15.081174  124886 cri.go:89] found id: ""
	I1008 14:52:15.081191  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.081198  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:15.081203  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:15.081262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:15.107235  124886 cri.go:89] found id: ""
	I1008 14:52:15.107251  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.107257  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:15.107265  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:15.107279  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:15.174130  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:15.174161  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:15.188435  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:15.188471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:15.244706  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:15.244720  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:15.244735  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:15.305071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:15.305098  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:17.835094  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:17.845787  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:17.845870  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:17.871734  124886 cri.go:89] found id: ""
	I1008 14:52:17.871749  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.871757  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:17.871764  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:17.871823  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:17.897412  124886 cri.go:89] found id: ""
	I1008 14:52:17.897433  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.897458  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:17.897467  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:17.897535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:17.925096  124886 cri.go:89] found id: ""
	I1008 14:52:17.925110  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.925117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:17.925122  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:17.925168  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:17.951272  124886 cri.go:89] found id: ""
	I1008 14:52:17.951289  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.951297  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:17.951301  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:17.951347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:17.976965  124886 cri.go:89] found id: ""
	I1008 14:52:17.976985  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.976992  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:17.976998  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:17.977042  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:18.003041  124886 cri.go:89] found id: ""
	I1008 14:52:18.003057  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.003064  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:18.003069  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:18.003113  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:18.028732  124886 cri.go:89] found id: ""
	I1008 14:52:18.028748  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.028756  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:18.028764  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:18.028774  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:18.092440  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:18.092467  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:18.121965  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:18.121984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:18.191653  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:18.191679  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:18.205820  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:18.205839  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:18.261002  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:20.762706  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:20.773592  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:20.773660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:20.799324  124886 cri.go:89] found id: ""
	I1008 14:52:20.799340  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.799347  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:20.799352  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:20.799394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:20.825415  124886 cri.go:89] found id: ""
	I1008 14:52:20.825430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.825436  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:20.825452  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:20.825504  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:20.851415  124886 cri.go:89] found id: ""
	I1008 14:52:20.851430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.851437  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:20.851454  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:20.851503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:20.878438  124886 cri.go:89] found id: ""
	I1008 14:52:20.878476  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.878484  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:20.878489  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:20.878536  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:20.903857  124886 cri.go:89] found id: ""
	I1008 14:52:20.903873  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.903884  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:20.903890  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:20.903948  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:20.930746  124886 cri.go:89] found id: ""
	I1008 14:52:20.930763  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.930770  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:20.930791  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:20.930842  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:20.956487  124886 cri.go:89] found id: ""
	I1008 14:52:20.956504  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.956510  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:20.956518  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:20.956528  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:21.026065  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:21.026087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:21.040112  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:21.040129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:21.095891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:21.095902  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:21.095914  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:21.159107  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:21.159129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:23.687668  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:23.698250  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:23.698317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:23.723805  124886 cri.go:89] found id: ""
	I1008 14:52:23.723832  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.723842  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:23.723850  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:23.723900  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:23.749813  124886 cri.go:89] found id: ""
	I1008 14:52:23.749831  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.749840  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:23.749847  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:23.749918  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:23.774918  124886 cri.go:89] found id: ""
	I1008 14:52:23.774934  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.774940  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:23.774945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:23.774999  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:23.800898  124886 cri.go:89] found id: ""
	I1008 14:52:23.800918  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.800925  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:23.800930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:23.800978  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:23.827330  124886 cri.go:89] found id: ""
	I1008 14:52:23.827348  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.827356  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:23.827360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:23.827405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:23.853485  124886 cri.go:89] found id: ""
	I1008 14:52:23.853503  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.853510  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:23.853515  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:23.853560  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:23.878936  124886 cri.go:89] found id: ""
	I1008 14:52:23.878957  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.878967  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:23.878976  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:23.878994  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:23.934831  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:23.934841  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:23.934851  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:23.993858  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:23.993885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:24.022945  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:24.022962  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:24.092836  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:24.092865  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.608369  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:26.619983  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:26.620060  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:26.646593  124886 cri.go:89] found id: ""
	I1008 14:52:26.646611  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.646621  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:26.646627  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:26.646678  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:26.673294  124886 cri.go:89] found id: ""
	I1008 14:52:26.673310  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.673317  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:26.673324  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:26.673367  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:26.699235  124886 cri.go:89] found id: ""
	I1008 14:52:26.699251  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.699257  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:26.699262  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:26.699320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:26.724993  124886 cri.go:89] found id: ""
	I1008 14:52:26.725009  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.725016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:26.725021  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:26.725074  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:26.749744  124886 cri.go:89] found id: ""
	I1008 14:52:26.749760  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.749767  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:26.749772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:26.749821  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:26.775226  124886 cri.go:89] found id: ""
	I1008 14:52:26.775246  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.775255  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:26.775260  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:26.775316  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:26.805104  124886 cri.go:89] found id: ""
	I1008 14:52:26.805120  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.805128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:26.805136  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:26.805152  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:26.834601  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:26.834618  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:26.900340  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:26.900361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.914389  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:26.914406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:26.969896  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:26.969911  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:26.969927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.531143  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:29.542884  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:29.542952  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:29.570323  124886 cri.go:89] found id: ""
	I1008 14:52:29.570339  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.570345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:29.570350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:29.570395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:29.596735  124886 cri.go:89] found id: ""
	I1008 14:52:29.596750  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.596756  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:29.596762  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:29.596811  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:29.622878  124886 cri.go:89] found id: ""
	I1008 14:52:29.622892  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.622898  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:29.622903  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:29.622950  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:29.648836  124886 cri.go:89] found id: ""
	I1008 14:52:29.648857  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.648880  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:29.648887  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:29.648939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:29.674729  124886 cri.go:89] found id: ""
	I1008 14:52:29.674747  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.674753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:29.674758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:29.674802  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:29.700542  124886 cri.go:89] found id: ""
	I1008 14:52:29.700558  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.700565  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:29.700571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:29.700615  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:29.726353  124886 cri.go:89] found id: ""
	I1008 14:52:29.726369  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.726375  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:29.726383  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:29.726395  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:29.790538  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:29.790560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:29.805071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:29.805087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:29.861336  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1008 14:52:29.861354  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:29.861367  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.921484  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:29.921507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.452001  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:32.462783  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:32.462839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:32.488895  124886 cri.go:89] found id: ""
	I1008 14:52:32.488913  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.488922  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:32.488929  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:32.488977  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:32.514655  124886 cri.go:89] found id: ""
	I1008 14:52:32.514674  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.514683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:32.514688  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:32.514739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:32.542007  124886 cri.go:89] found id: ""
	I1008 14:52:32.542027  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.542037  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:32.542044  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:32.542100  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:32.569946  124886 cri.go:89] found id: ""
	I1008 14:52:32.569963  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.569970  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:32.569976  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:32.570022  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:32.595032  124886 cri.go:89] found id: ""
	I1008 14:52:32.595051  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.595061  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:32.595066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:32.595127  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:32.621883  124886 cri.go:89] found id: ""
	I1008 14:52:32.621903  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.621923  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:32.621930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:32.621983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:32.647589  124886 cri.go:89] found id: ""
	I1008 14:52:32.647606  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.647612  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:32.647620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:32.647630  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:32.703098  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:32.703108  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:32.703129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:32.766481  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:32.766502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.794530  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:32.794546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:32.864662  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:32.864687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.381050  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:35.391807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:35.391868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:35.418369  124886 cri.go:89] found id: ""
	I1008 14:52:35.418388  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.418397  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:35.418402  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:35.418467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:35.444660  124886 cri.go:89] found id: ""
	I1008 14:52:35.444676  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.444683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:35.444687  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:35.444736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:35.471158  124886 cri.go:89] found id: ""
	I1008 14:52:35.471183  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.471190  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:35.471195  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:35.471238  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:35.496271  124886 cri.go:89] found id: ""
	I1008 14:52:35.496288  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.496295  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:35.496300  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:35.496345  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:35.521987  124886 cri.go:89] found id: ""
	I1008 14:52:35.522005  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.522015  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:35.522039  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:35.522098  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:35.547647  124886 cri.go:89] found id: ""
	I1008 14:52:35.547664  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.547673  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:35.547678  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:35.547723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:35.573056  124886 cri.go:89] found id: ""
	I1008 14:52:35.573075  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.573085  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:35.573109  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:35.573123  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:35.640898  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:35.640923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.655247  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:35.655265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:35.712555  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:35.712565  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:35.712575  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:35.772556  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:35.772579  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.301881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:38.312627  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:38.312694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:38.337192  124886 cri.go:89] found id: ""
	I1008 14:52:38.337210  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.337220  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:38.337227  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:38.337278  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:38.361703  124886 cri.go:89] found id: ""
	I1008 14:52:38.361721  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.361730  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:38.361736  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:38.361786  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:38.387263  124886 cri.go:89] found id: ""
	I1008 14:52:38.387279  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.387286  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:38.387290  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:38.387334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:38.413808  124886 cri.go:89] found id: ""
	I1008 14:52:38.413824  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.413830  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:38.413835  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:38.413880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:38.440014  124886 cri.go:89] found id: ""
	I1008 14:52:38.440029  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.440036  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:38.440041  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:38.440085  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:38.466144  124886 cri.go:89] found id: ""
	I1008 14:52:38.466164  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.466174  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:38.466181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:38.466229  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:38.491536  124886 cri.go:89] found id: ""
	I1008 14:52:38.491554  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.491563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:38.491573  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:38.491584  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.520248  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:38.520265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:38.588833  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:38.588861  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:38.603136  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:38.603155  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:38.659278  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:38.659290  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:38.659301  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.224716  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:41.235550  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:41.235600  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:41.261421  124886 cri.go:89] found id: ""
	I1008 14:52:41.261436  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.261455  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:41.261463  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:41.261516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:41.286798  124886 cri.go:89] found id: ""
	I1008 14:52:41.286813  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.286839  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:41.286844  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:41.286904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:41.312542  124886 cri.go:89] found id: ""
	I1008 14:52:41.312558  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.312567  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:41.312574  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:41.312623  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:41.339001  124886 cri.go:89] found id: ""
	I1008 14:52:41.339016  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.339022  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:41.339027  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:41.339073  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:41.365019  124886 cri.go:89] found id: ""
	I1008 14:52:41.365040  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.365049  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:41.365056  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:41.365115  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:41.389878  124886 cri.go:89] found id: ""
	I1008 14:52:41.389897  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.389904  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:41.389910  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:41.389960  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:41.415856  124886 cri.go:89] found id: ""
	I1008 14:52:41.415875  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.415884  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:41.415895  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:41.415909  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:41.481175  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:41.481196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:41.495356  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:41.495373  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:41.552891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:41.552910  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:41.552927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.615245  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:41.615282  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:44.146351  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:44.157234  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:44.157294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:44.183016  124886 cri.go:89] found id: ""
	I1008 14:52:44.183032  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.183039  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:44.183044  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:44.183094  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:44.209452  124886 cri.go:89] found id: ""
	I1008 14:52:44.209471  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.209480  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:44.209487  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:44.209535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:44.236057  124886 cri.go:89] found id: ""
	I1008 14:52:44.236079  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.236088  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:44.236094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:44.236165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:44.262249  124886 cri.go:89] found id: ""
	I1008 14:52:44.262265  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.262274  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:44.262281  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:44.262333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:44.288222  124886 cri.go:89] found id: ""
	I1008 14:52:44.288240  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.288249  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:44.288254  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:44.288303  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:44.312991  124886 cri.go:89] found id: ""
	I1008 14:52:44.313009  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.313017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:44.313022  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:44.313066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:44.338794  124886 cri.go:89] found id: ""
	I1008 14:52:44.338814  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.338823  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:44.338835  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:44.338849  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:44.408632  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:44.408655  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:44.423360  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:44.423381  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:44.481035  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:44.481052  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:44.481068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:44.545061  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:44.545093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.075772  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:47.086739  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:47.086782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:47.112465  124886 cri.go:89] found id: ""
	I1008 14:52:47.112483  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.112492  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:47.112497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:47.112546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:47.140124  124886 cri.go:89] found id: ""
	I1008 14:52:47.140139  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.140145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:47.140150  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:47.140194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:47.167347  124886 cri.go:89] found id: ""
	I1008 14:52:47.167366  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.167376  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:47.167382  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:47.167428  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:47.193008  124886 cri.go:89] found id: ""
	I1008 14:52:47.193025  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.193032  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:47.193037  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:47.193081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:47.218907  124886 cri.go:89] found id: ""
	I1008 14:52:47.218922  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.218932  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:47.218938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:47.218992  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:47.244390  124886 cri.go:89] found id: ""
	I1008 14:52:47.244406  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.244413  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:47.244418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:47.244485  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:47.270432  124886 cri.go:89] found id: ""
	I1008 14:52:47.270460  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.270473  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:47.270482  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:47.270496  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:47.284419  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:47.284434  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:47.340814  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:47.340829  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:47.340840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:47.405347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:47.405371  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.434675  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:47.434693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:50.001509  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:50.012521  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:50.012580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:50.038871  124886 cri.go:89] found id: ""
	I1008 14:52:50.038886  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.038895  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:50.038901  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:50.038945  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:50.065691  124886 cri.go:89] found id: ""
	I1008 14:52:50.065707  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.065713  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:50.065718  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:50.065764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:50.091421  124886 cri.go:89] found id: ""
	I1008 14:52:50.091439  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.091459  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:50.091466  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:50.091516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:50.117900  124886 cri.go:89] found id: ""
	I1008 14:52:50.117916  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.117922  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:50.117927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:50.117971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:50.143795  124886 cri.go:89] found id: ""
	I1008 14:52:50.143811  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.143837  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:50.143842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:50.143889  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:50.170009  124886 cri.go:89] found id: ""
	I1008 14:52:50.170025  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.170032  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:50.170036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:50.170081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:50.195182  124886 cri.go:89] found id: ""
	I1008 14:52:50.195198  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.195204  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:50.195213  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:50.195226  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:50.208906  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:50.208923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:50.263732  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:50.263744  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:50.263754  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:50.321967  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:50.321990  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:50.350825  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:50.350843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:52.919243  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:52.929975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:52.930069  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:52.956423  124886 cri.go:89] found id: ""
	I1008 14:52:52.956439  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.956463  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:52.956470  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:52.956519  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:52.982128  124886 cri.go:89] found id: ""
	I1008 14:52:52.982143  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.982150  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:52.982155  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:52.982204  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:53.008335  124886 cri.go:89] found id: ""
	I1008 14:52:53.008351  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.008358  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:53.008363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:53.008416  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:53.035683  124886 cri.go:89] found id: ""
	I1008 14:52:53.035698  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.035705  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:53.035710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:53.035753  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:53.061482  124886 cri.go:89] found id: ""
	I1008 14:52:53.061590  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.061610  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:53.061619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:53.061673  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:53.088358  124886 cri.go:89] found id: ""
	I1008 14:52:53.088375  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.088384  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:53.088390  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:53.088467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:53.113970  124886 cri.go:89] found id: ""
	I1008 14:52:53.113988  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.113995  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:53.114003  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:53.114016  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:53.181486  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:53.181511  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:53.195603  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:53.195620  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:53.251571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:53.251582  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:53.251592  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:53.312589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:53.312610  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:55.843180  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:55.854192  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:55.854250  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:55.878967  124886 cri.go:89] found id: ""
	I1008 14:52:55.878984  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.878992  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:55.878997  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:55.879050  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:55.904136  124886 cri.go:89] found id: ""
	I1008 14:52:55.904151  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.904157  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:55.904174  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:55.904216  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:55.928319  124886 cri.go:89] found id: ""
	I1008 14:52:55.928337  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.928348  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:55.928353  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:55.928406  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:55.955314  124886 cri.go:89] found id: ""
	I1008 14:52:55.955330  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.955338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:55.955345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:55.955405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:55.980957  124886 cri.go:89] found id: ""
	I1008 14:52:55.980976  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.980985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:55.980992  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:55.981040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:56.006492  124886 cri.go:89] found id: ""
	I1008 14:52:56.006507  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.006514  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:56.006519  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:56.006566  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:56.032919  124886 cri.go:89] found id: ""
	I1008 14:52:56.032934  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.032940  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:56.032948  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:56.032960  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:56.061693  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:56.061713  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:56.127262  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:56.127284  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:56.141728  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:56.141744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:56.197783  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:56.197799  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:56.197815  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:58.759309  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:58.770096  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:58.770150  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:58.796177  124886 cri.go:89] found id: ""
	I1008 14:52:58.796192  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.796199  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:58.796208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:58.796260  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:58.821988  124886 cri.go:89] found id: ""
	I1008 14:52:58.822006  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.822013  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:58.822018  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:58.822068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:58.847935  124886 cri.go:89] found id: ""
	I1008 14:52:58.847953  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.847961  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:58.847966  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:58.848015  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:58.874796  124886 cri.go:89] found id: ""
	I1008 14:52:58.874814  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.874821  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:58.874826  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:58.874880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:58.899925  124886 cri.go:89] found id: ""
	I1008 14:52:58.899941  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.899948  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:58.899953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:58.900008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:58.926934  124886 cri.go:89] found id: ""
	I1008 14:52:58.926950  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.926958  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:58.926963  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:58.927006  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:58.953664  124886 cri.go:89] found id: ""
	I1008 14:52:58.953680  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.953687  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:58.953694  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:58.953709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:59.010616  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:59.010629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:59.010640  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:59.071358  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:59.071382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:59.099863  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:59.099886  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:59.168071  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:59.168163  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.684667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:01.695456  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:01.695524  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:01.721627  124886 cri.go:89] found id: ""
	I1008 14:53:01.721644  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.721652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:01.721656  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:01.721715  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:01.748495  124886 cri.go:89] found id: ""
	I1008 14:53:01.748512  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.748518  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:01.748523  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:01.748583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:01.774281  124886 cri.go:89] found id: ""
	I1008 14:53:01.774298  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.774310  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:01.774316  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:01.774377  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:01.800414  124886 cri.go:89] found id: ""
	I1008 14:53:01.800430  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.800437  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:01.800458  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:01.800513  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:01.825727  124886 cri.go:89] found id: ""
	I1008 14:53:01.825746  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.825753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:01.825758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:01.825804  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:01.852777  124886 cri.go:89] found id: ""
	I1008 14:53:01.852794  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.852802  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:01.852807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:01.852855  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:01.879499  124886 cri.go:89] found id: ""
	I1008 14:53:01.879516  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.879522  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:01.879530  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:01.879542  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:01.908367  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:01.908386  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:01.976337  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:01.976358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.990844  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:01.990863  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:02.047840  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:02.047852  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:02.047864  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.612824  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:04.623886  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:04.623937  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:04.650245  124886 cri.go:89] found id: ""
	I1008 14:53:04.650265  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.650274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:04.650282  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:04.650338  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:04.675795  124886 cri.go:89] found id: ""
	I1008 14:53:04.675814  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.675849  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:04.675856  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:04.675910  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:04.701855  124886 cri.go:89] found id: ""
	I1008 14:53:04.701874  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.701883  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:04.701889  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:04.701951  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:04.727569  124886 cri.go:89] found id: ""
	I1008 14:53:04.727584  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.727590  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:04.727595  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:04.727637  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:04.753254  124886 cri.go:89] found id: ""
	I1008 14:53:04.753269  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.753276  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:04.753280  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:04.753329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:04.779529  124886 cri.go:89] found id: ""
	I1008 14:53:04.779548  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.779557  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:04.779564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:04.779611  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:04.806307  124886 cri.go:89] found id: ""
	I1008 14:53:04.806326  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.806335  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:04.806346  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:04.806361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:04.820357  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:04.820374  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:04.876718  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:04.876732  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:04.876748  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.940387  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:04.940412  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:04.969994  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:04.970009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.538422  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:07.550831  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:07.550884  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:07.577673  124886 cri.go:89] found id: ""
	I1008 14:53:07.577687  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.577693  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:07.577698  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:07.577750  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:07.603662  124886 cri.go:89] found id: ""
	I1008 14:53:07.603680  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.603695  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:07.603700  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:07.603746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:07.629802  124886 cri.go:89] found id: ""
	I1008 14:53:07.629821  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.629830  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:07.629834  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:07.629886  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:07.656081  124886 cri.go:89] found id: ""
	I1008 14:53:07.656096  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.656102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:07.656107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:07.656170  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:07.682162  124886 cri.go:89] found id: ""
	I1008 14:53:07.682177  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.682184  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:07.682189  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:07.682233  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:07.708617  124886 cri.go:89] found id: ""
	I1008 14:53:07.708635  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.708648  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:07.708653  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:07.708708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:07.734755  124886 cri.go:89] found id: ""
	I1008 14:53:07.734772  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.734782  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:07.734793  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:07.734807  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:07.794522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:07.794548  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:07.823563  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:07.823581  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.892786  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:07.892808  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:07.907262  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:07.907281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:07.962940  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
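The cycle above repeats every two to three seconds while minikube waits for the apiserver: it probes for a kube-apiserver process with pgrep, asks the CRI for each expected control-plane container by name, then gathers kubelet, dmesg, CRI-O and `kubectl describe nodes` output, which keeps failing because nothing is listening on localhost:8441 yet. A minimal sketch of that probe loop, built only from the commands visible in the log (the echo messages are illustrative, not minikube's own output; paths match the log and crictl is assumed to be on the node's PATH):

	#!/usr/bin/env bash
	# Sketch of the control-plane probe repeated in the log above.
	KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	KUBECONFIG=/var/lib/minikube/kubeconfig

	# Is any kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# Ask the container runtime for each expected control-plane container.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
	    echo "no container matching \"$name\""
	  fi
	done

	# Collect the same diagnostics minikube gathers on every failed iteration.
	sudo journalctl -u kubelet -n 400 >/dev/null
	sudo journalctl -u crio -n 400 >/dev/null
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >/dev/null
	sudo "$KUBECTL" describe nodes --kubeconfig="$KUBECONFIG" \
	  || echo "apiserver on localhost:8441 not reachable yet"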
	I1008 14:53:10.464656  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:10.476746  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:10.476800  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:10.502937  124886 cri.go:89] found id: ""
	I1008 14:53:10.502958  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.502968  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:10.502974  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:10.503025  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:10.529780  124886 cri.go:89] found id: ""
	I1008 14:53:10.529796  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.529803  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:10.529807  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:10.529856  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:10.556092  124886 cri.go:89] found id: ""
	I1008 14:53:10.556108  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.556117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:10.556124  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:10.556184  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:10.582264  124886 cri.go:89] found id: ""
	I1008 14:53:10.582281  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.582290  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:10.582296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:10.582354  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:10.608631  124886 cri.go:89] found id: ""
	I1008 14:53:10.608647  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.608655  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:10.608662  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:10.608721  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:10.635697  124886 cri.go:89] found id: ""
	I1008 14:53:10.635715  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.635725  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:10.635732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:10.635793  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:10.661998  124886 cri.go:89] found id: ""
	I1008 14:53:10.662018  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.662028  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:10.662040  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:10.662055  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:10.728096  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:10.728121  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:10.742521  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:10.742543  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:10.799551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.799566  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:10.799578  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:10.863614  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:10.863636  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.396084  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:13.407066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:13.407128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:13.433323  124886 cri.go:89] found id: ""
	I1008 14:53:13.433339  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.433345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:13.433350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:13.433393  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:13.460409  124886 cri.go:89] found id: ""
	I1008 14:53:13.460510  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.460522  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:13.460528  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:13.460589  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:13.487660  124886 cri.go:89] found id: ""
	I1008 14:53:13.487679  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.487689  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:13.487696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:13.487746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:13.515522  124886 cri.go:89] found id: ""
	I1008 14:53:13.515538  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.515546  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:13.515551  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:13.515595  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:13.540751  124886 cri.go:89] found id: ""
	I1008 14:53:13.540767  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.540773  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:13.540778  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:13.540846  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:13.566812  124886 cri.go:89] found id: ""
	I1008 14:53:13.566829  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.566837  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:13.566842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:13.566904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:13.593236  124886 cri.go:89] found id: ""
	I1008 14:53:13.593255  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.593262  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:13.593271  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:13.593281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:13.657627  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:13.657651  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.686303  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:13.686320  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:13.755568  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:13.755591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:13.769800  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:13.769819  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:13.826318  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:16.327013  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:16.337840  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:16.337908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:16.363203  124886 cri.go:89] found id: ""
	I1008 14:53:16.363221  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.363230  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:16.363235  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:16.363288  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:16.388535  124886 cri.go:89] found id: ""
	I1008 14:53:16.388551  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.388557  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:16.388563  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:16.388606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:16.414195  124886 cri.go:89] found id: ""
	I1008 14:53:16.414213  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.414221  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:16.414226  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:16.414274  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:16.440199  124886 cri.go:89] found id: ""
	I1008 14:53:16.440214  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.440221  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:16.440227  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:16.440283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:16.465899  124886 cri.go:89] found id: ""
	I1008 14:53:16.465918  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.465925  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:16.465931  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:16.465976  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:16.491135  124886 cri.go:89] found id: ""
	I1008 14:53:16.491151  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.491157  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:16.491162  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:16.491205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:16.517298  124886 cri.go:89] found id: ""
	I1008 14:53:16.517315  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.517323  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:16.517331  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:16.517342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:16.581777  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:16.581803  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:16.611824  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:16.611843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:16.679935  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:16.679957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:16.694087  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:16.694103  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:16.750382  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:19.252068  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:19.262927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:19.262980  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:19.288263  124886 cri.go:89] found id: ""
	I1008 14:53:19.288280  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.288286  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:19.288291  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:19.288334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:19.314749  124886 cri.go:89] found id: ""
	I1008 14:53:19.314769  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.314776  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:19.314781  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:19.314833  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:19.343105  124886 cri.go:89] found id: ""
	I1008 14:53:19.343124  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.343132  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:19.343137  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:19.343194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:19.369348  124886 cri.go:89] found id: ""
	I1008 14:53:19.369367  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.369376  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:19.369384  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:19.369438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:19.394541  124886 cri.go:89] found id: ""
	I1008 14:53:19.394556  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.394564  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:19.394569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:19.394617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:19.419883  124886 cri.go:89] found id: ""
	I1008 14:53:19.419900  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.419907  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:19.419911  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:19.419959  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:19.447316  124886 cri.go:89] found id: ""
	I1008 14:53:19.447332  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.447339  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:19.447347  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:19.447360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:19.509190  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:19.509213  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:19.538580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:19.538601  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:19.610379  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:19.610406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:19.625094  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:19.625115  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:19.682583  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:22.184381  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:22.195435  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:22.195496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:22.222530  124886 cri.go:89] found id: ""
	I1008 14:53:22.222549  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.222559  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:22.222565  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:22.222631  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:22.249103  124886 cri.go:89] found id: ""
	I1008 14:53:22.249118  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.249125  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:22.249130  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:22.249185  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:22.275859  124886 cri.go:89] found id: ""
	I1008 14:53:22.275877  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.275886  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:22.275891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:22.275944  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:22.301816  124886 cri.go:89] found id: ""
	I1008 14:53:22.301835  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.301845  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:22.301852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:22.301906  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:22.328795  124886 cri.go:89] found id: ""
	I1008 14:53:22.328810  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.328817  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:22.328821  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:22.328877  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:22.355119  124886 cri.go:89] found id: ""
	I1008 14:53:22.355134  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.355141  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:22.355146  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:22.355200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:22.382211  124886 cri.go:89] found id: ""
	I1008 14:53:22.382229  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.382238  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:22.382248  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:22.382262  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:22.442814  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:22.442840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:22.473721  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:22.473746  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:22.539788  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:22.539811  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:22.554277  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:22.554295  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:22.610102  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.110358  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:25.121359  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:25.121409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:25.146726  124886 cri.go:89] found id: ""
	I1008 14:53:25.146741  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.146747  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:25.146752  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:25.146797  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:25.173762  124886 cri.go:89] found id: ""
	I1008 14:53:25.173780  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.173788  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:25.173792  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:25.173839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:25.200613  124886 cri.go:89] found id: ""
	I1008 14:53:25.200630  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.200636  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:25.200641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:25.200686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:25.227307  124886 cri.go:89] found id: ""
	I1008 14:53:25.227327  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.227338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:25.227345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:25.227395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:25.253257  124886 cri.go:89] found id: ""
	I1008 14:53:25.253272  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.253278  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:25.253283  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:25.253329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:25.281060  124886 cri.go:89] found id: ""
	I1008 14:53:25.281077  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.281089  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:25.281094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:25.281140  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:25.306651  124886 cri.go:89] found id: ""
	I1008 14:53:25.306668  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.306678  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:25.306688  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:25.306699  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:25.373410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:25.373433  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:25.388282  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:25.388304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:25.445863  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.445874  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:25.445885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:25.510564  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:25.510590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.041417  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:28.052378  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:28.052432  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:28.078711  124886 cri.go:89] found id: ""
	I1008 14:53:28.078728  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.078734  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:28.078740  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:28.078782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:28.105010  124886 cri.go:89] found id: ""
	I1008 14:53:28.105025  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.105031  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:28.105036  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:28.105088  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:28.131983  124886 cri.go:89] found id: ""
	I1008 14:53:28.132001  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.132011  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:28.132017  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:28.132076  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:28.159135  124886 cri.go:89] found id: ""
	I1008 14:53:28.159153  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.159160  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:28.159166  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:28.159212  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:28.187793  124886 cri.go:89] found id: ""
	I1008 14:53:28.187811  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.187821  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:28.187827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:28.187872  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:28.214232  124886 cri.go:89] found id: ""
	I1008 14:53:28.214251  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.214265  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:28.214272  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:28.214335  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:28.240649  124886 cri.go:89] found id: ""
	I1008 14:53:28.240663  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.240669  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:28.240677  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:28.240687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:28.304071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:28.304094  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.333331  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:28.333346  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:28.401896  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:28.401919  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:28.416514  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:28.416531  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:28.472271  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:30.972553  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:30.983612  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:30.983666  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:31.011336  124886 cri.go:89] found id: ""
	I1008 14:53:31.011350  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.011357  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:31.011362  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:31.011405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:31.036913  124886 cri.go:89] found id: ""
	I1008 14:53:31.036935  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.036944  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:31.036948  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:31.037003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:31.063500  124886 cri.go:89] found id: ""
	I1008 14:53:31.063516  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.063523  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:31.063527  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:31.063582  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:31.091035  124886 cri.go:89] found id: ""
	I1008 14:53:31.091057  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.091066  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:31.091073  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:31.091123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:31.117295  124886 cri.go:89] found id: ""
	I1008 14:53:31.117310  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.117317  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:31.117322  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:31.117372  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:31.143795  124886 cri.go:89] found id: ""
	I1008 14:53:31.143810  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.143815  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:31.143820  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:31.143863  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:31.170134  124886 cri.go:89] found id: ""
	I1008 14:53:31.170150  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.170157  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:31.170164  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:31.170174  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:31.241300  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:31.241324  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:31.255637  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:31.255656  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:31.312716  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:31.312725  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:31.312736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:31.377091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:31.377114  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:33.907080  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:33.918207  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:33.918262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:33.944092  124886 cri.go:89] found id: ""
	I1008 14:53:33.944111  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.944122  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:33.944129  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:33.944192  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:33.970271  124886 cri.go:89] found id: ""
	I1008 14:53:33.970286  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.970293  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:33.970298  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:33.970347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:33.996407  124886 cri.go:89] found id: ""
	I1008 14:53:33.996421  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.996427  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:33.996433  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:33.996503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:34.023513  124886 cri.go:89] found id: ""
	I1008 14:53:34.023533  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.023542  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:34.023549  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:34.023606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:34.050777  124886 cri.go:89] found id: ""
	I1008 14:53:34.050797  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.050807  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:34.050813  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:34.050868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:34.077691  124886 cri.go:89] found id: ""
	I1008 14:53:34.077710  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.077719  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:34.077724  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:34.077769  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:34.104354  124886 cri.go:89] found id: ""
	I1008 14:53:34.104373  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.104380  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:34.104388  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:34.104404  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:34.171873  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:34.171899  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:34.185891  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:34.185908  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:34.243162  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:34.243172  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:34.243185  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:34.306766  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:34.306791  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:36.836905  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:36.848013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:36.848068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:36.873912  124886 cri.go:89] found id: ""
	I1008 14:53:36.873930  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.873938  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:36.873944  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:36.873994  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:36.899859  124886 cri.go:89] found id: ""
	I1008 14:53:36.899875  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.899881  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:36.899886  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:36.899930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:36.926292  124886 cri.go:89] found id: ""
	I1008 14:53:36.926314  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.926321  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:36.926326  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:36.926370  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:36.952172  124886 cri.go:89] found id: ""
	I1008 14:53:36.952189  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.952196  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:36.952201  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:36.952248  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:36.978525  124886 cri.go:89] found id: ""
	I1008 14:53:36.978542  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.978548  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:36.978553  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:36.978605  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:37.005955  124886 cri.go:89] found id: ""
	I1008 14:53:37.005973  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.005984  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:37.005990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:37.006037  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:37.032282  124886 cri.go:89] found id: ""
	I1008 14:53:37.032300  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.032310  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:37.032320  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:37.032336  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:37.100471  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:37.100507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:37.114707  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:37.114727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:37.173117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:37.173128  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:37.173138  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:37.237613  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:37.237637  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:39.769167  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:39.780181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:39.780239  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:39.805900  124886 cri.go:89] found id: ""
	I1008 14:53:39.805921  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.805928  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:39.805935  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:39.805982  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:39.832463  124886 cri.go:89] found id: ""
	I1008 14:53:39.832485  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.832493  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:39.832501  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:39.832565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:39.859105  124886 cri.go:89] found id: ""
	I1008 14:53:39.859120  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.859127  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:39.859132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:39.859176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:39.885372  124886 cri.go:89] found id: ""
	I1008 14:53:39.885395  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.885402  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:39.885410  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:39.885476  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:39.911669  124886 cri.go:89] found id: ""
	I1008 14:53:39.911684  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.911691  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:39.911696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:39.911743  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:39.939236  124886 cri.go:89] found id: ""
	I1008 14:53:39.939254  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.939263  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:39.939269  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:39.939329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:39.967816  124886 cri.go:89] found id: ""
	I1008 14:53:39.967833  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.967839  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:39.967847  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:39.967859  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:39.982071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:39.982090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:40.038524  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:40.038545  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:40.038560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:40.099347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:40.099369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:40.128637  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:40.128654  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.700345  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:42.711170  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:42.711224  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:42.738404  124886 cri.go:89] found id: ""
	I1008 14:53:42.738420  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.738426  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:42.738431  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:42.738496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:42.765170  124886 cri.go:89] found id: ""
	I1008 14:53:42.765185  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.765192  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:42.765196  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:42.765244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:42.790844  124886 cri.go:89] found id: ""
	I1008 14:53:42.790862  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.790870  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:42.790876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:42.790920  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:42.817749  124886 cri.go:89] found id: ""
	I1008 14:53:42.817765  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.817772  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:42.817777  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:42.817826  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:42.844796  124886 cri.go:89] found id: ""
	I1008 14:53:42.844815  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.844823  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:42.844827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:42.844882  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:42.870976  124886 cri.go:89] found id: ""
	I1008 14:53:42.870993  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.871001  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:42.871006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:42.871051  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:42.897679  124886 cri.go:89] found id: ""
	I1008 14:53:42.897698  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.897707  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:42.897716  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:42.897727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.967720  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:42.967744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:42.981967  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:42.981984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:43.039728  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:43.039742  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:43.039753  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:43.101886  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:43.101911  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:45.635598  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:45.646564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:45.646617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:45.673775  124886 cri.go:89] found id: ""
	I1008 14:53:45.673791  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.673797  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:45.673802  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:45.673845  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:45.700610  124886 cri.go:89] found id: ""
	I1008 14:53:45.700627  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.700633  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:45.700638  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:45.700694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:45.726636  124886 cri.go:89] found id: ""
	I1008 14:53:45.726653  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.726662  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:45.726669  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:45.726723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:45.753352  124886 cri.go:89] found id: ""
	I1008 14:53:45.753367  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.753374  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:45.753379  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:45.753434  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:45.780250  124886 cri.go:89] found id: ""
	I1008 14:53:45.780266  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.780272  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:45.780277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:45.780326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:45.805847  124886 cri.go:89] found id: ""
	I1008 14:53:45.805863  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.805870  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:45.805875  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:45.805940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:45.832274  124886 cri.go:89] found id: ""
	I1008 14:53:45.832290  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.832297  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:45.832304  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:45.832315  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:45.901895  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:45.901925  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:45.916420  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:45.916438  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:45.972937  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:45.972948  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:45.972958  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:46.034817  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:46.034841  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.564993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:48.576052  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:48.576102  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:48.602007  124886 cri.go:89] found id: ""
	I1008 14:53:48.602024  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.602031  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:48.602035  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:48.602080  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:48.628143  124886 cri.go:89] found id: ""
	I1008 14:53:48.628160  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.628168  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:48.628173  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:48.628218  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:48.655880  124886 cri.go:89] found id: ""
	I1008 14:53:48.655898  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.655907  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:48.655913  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:48.655958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:48.683255  124886 cri.go:89] found id: ""
	I1008 14:53:48.683270  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.683278  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:48.683284  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:48.683337  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:48.709473  124886 cri.go:89] found id: ""
	I1008 14:53:48.709492  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.709501  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:48.709508  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:48.709567  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:48.736246  124886 cri.go:89] found id: ""
	I1008 14:53:48.736268  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.736274  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:48.736279  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:48.736327  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:48.763463  124886 cri.go:89] found id: ""
	I1008 14:53:48.763483  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.763493  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:48.763503  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:48.763518  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.792359  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:48.792378  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:48.859056  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:48.859077  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:48.873385  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:48.873405  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:48.931065  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:48.931075  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:48.931087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:51.494941  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:51.505819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:51.505869  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:51.533622  124886 cri.go:89] found id: ""
	I1008 14:53:51.533643  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.533652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:51.533659  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:51.533707  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:51.560499  124886 cri.go:89] found id: ""
	I1008 14:53:51.560519  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.560528  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:51.560536  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:51.560584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:51.587541  124886 cri.go:89] found id: ""
	I1008 14:53:51.587556  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.587564  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:51.587569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:51.587616  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:51.614266  124886 cri.go:89] found id: ""
	I1008 14:53:51.614284  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.614291  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:51.614296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:51.614343  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:51.639614  124886 cri.go:89] found id: ""
	I1008 14:53:51.639632  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.639641  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:51.639649  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:51.639708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:51.667306  124886 cri.go:89] found id: ""
	I1008 14:53:51.667322  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.667328  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:51.667333  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:51.667375  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:51.692160  124886 cri.go:89] found id: ""
	I1008 14:53:51.692175  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.692182  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:51.692191  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:51.692204  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:51.720341  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:51.720358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:51.785600  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:51.785622  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:51.800298  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:51.800317  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:51.857283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:51.857293  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:51.857304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:54.424673  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:54.435975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:54.436023  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:54.462429  124886 cri.go:89] found id: ""
	I1008 14:53:54.462462  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.462472  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:54.462479  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:54.462528  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:54.489261  124886 cri.go:89] found id: ""
	I1008 14:53:54.489276  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.489284  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:54.489289  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:54.489344  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:54.514962  124886 cri.go:89] found id: ""
	I1008 14:53:54.514980  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.514990  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:54.514996  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:54.515040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:54.541414  124886 cri.go:89] found id: ""
	I1008 14:53:54.541428  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.541435  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:54.541439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:54.541501  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:54.567913  124886 cri.go:89] found id: ""
	I1008 14:53:54.567931  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.567940  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:54.567945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:54.568008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:54.594492  124886 cri.go:89] found id: ""
	I1008 14:53:54.594511  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.594522  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:54.594528  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:54.594583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:54.621305  124886 cri.go:89] found id: ""
	I1008 14:53:54.621321  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.621330  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:54.621338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:54.621348  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:54.648627  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:54.648645  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:54.717360  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:54.717382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:54.731905  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:54.731923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:54.788630  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:54.788640  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:54.788650  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.353718  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:57.365518  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:57.365570  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:57.391621  124886 cri.go:89] found id: ""
	I1008 14:53:57.391638  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.391646  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:57.391650  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:57.391704  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:57.419557  124886 cri.go:89] found id: ""
	I1008 14:53:57.419574  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.419582  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:57.419587  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:57.419643  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:57.447029  124886 cri.go:89] found id: ""
	I1008 14:53:57.447047  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.447059  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:57.447077  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:57.447126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:57.473391  124886 cri.go:89] found id: ""
	I1008 14:53:57.473410  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.473418  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:57.473423  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:57.473494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:57.499437  124886 cri.go:89] found id: ""
	I1008 14:53:57.499472  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.499481  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:57.499486  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:57.499542  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:57.525753  124886 cri.go:89] found id: ""
	I1008 14:53:57.525770  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.525776  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:57.525782  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:57.525827  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:57.555506  124886 cri.go:89] found id: ""
	I1008 14:53:57.555523  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.555529  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:57.555539  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:57.555553  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:57.623045  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:57.623068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:57.637620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:57.637638  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:57.695326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:57.695339  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:57.695356  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.755685  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:57.755710  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:00.285648  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:00.296554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:00.296603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:00.322379  124886 cri.go:89] found id: ""
	I1008 14:54:00.322396  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.322405  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:00.322409  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:00.322474  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:00.349397  124886 cri.go:89] found id: ""
	I1008 14:54:00.349414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.349423  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:00.349429  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:00.349507  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:00.375588  124886 cri.go:89] found id: ""
	I1008 14:54:00.375602  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.375608  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:00.375613  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:00.375670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:00.401398  124886 cri.go:89] found id: ""
	I1008 14:54:00.401414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.401420  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:00.401426  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:00.401494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:00.427652  124886 cri.go:89] found id: ""
	I1008 14:54:00.427668  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.427675  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:00.427680  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:00.427736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:00.451896  124886 cri.go:89] found id: ""
	I1008 14:54:00.451911  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.451918  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:00.451923  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:00.451967  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:00.478107  124886 cri.go:89] found id: ""
	I1008 14:54:00.478122  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.478128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:00.478135  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:00.478145  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:00.547950  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:00.547974  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:00.561968  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:00.561986  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:00.618117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:00.618131  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:00.618141  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:00.683464  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:00.683490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.211808  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:03.222618  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:03.222667  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:03.248716  124886 cri.go:89] found id: ""
	I1008 14:54:03.248732  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.248738  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:03.248742  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:03.248784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:03.275183  124886 cri.go:89] found id: ""
	I1008 14:54:03.275202  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.275209  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:03.275214  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:03.275262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:03.301882  124886 cri.go:89] found id: ""
	I1008 14:54:03.301909  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.301915  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:03.301920  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:03.301966  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:03.328783  124886 cri.go:89] found id: ""
	I1008 14:54:03.328799  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.328811  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:03.328817  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:03.328864  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:03.355235  124886 cri.go:89] found id: ""
	I1008 14:54:03.355251  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.355259  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:03.355268  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:03.355313  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:03.382286  124886 cri.go:89] found id: ""
	I1008 14:54:03.382305  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.382313  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:03.382318  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:03.382371  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:03.408682  124886 cri.go:89] found id: ""
	I1008 14:54:03.408700  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.408708  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:03.408718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:03.408732  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.438177  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:03.438196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:03.507859  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:03.507881  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:03.523723  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:03.523747  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:03.580407  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:03.580418  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:03.580430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.142863  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:06.153852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:06.153912  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:06.180234  124886 cri.go:89] found id: ""
	I1008 14:54:06.180253  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.180264  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:06.180271  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:06.180320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:06.207080  124886 cri.go:89] found id: ""
	I1008 14:54:06.207094  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.207101  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:06.207106  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:06.207152  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:06.232369  124886 cri.go:89] found id: ""
	I1008 14:54:06.232384  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.232390  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:06.232394  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:06.232438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:06.257360  124886 cri.go:89] found id: ""
	I1008 14:54:06.257376  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.257383  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:06.257388  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:06.257433  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:06.284487  124886 cri.go:89] found id: ""
	I1008 14:54:06.284507  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.284516  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:06.284523  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:06.284584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:06.310846  124886 cri.go:89] found id: ""
	I1008 14:54:06.310863  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.310874  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:06.310882  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:06.310935  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:06.337095  124886 cri.go:89] found id: ""
	I1008 14:54:06.337114  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.337121  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:06.337130  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:06.337142  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:06.406561  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:06.406591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:06.421066  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:06.421088  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:06.477926  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:06.477943  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:06.477957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.538516  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:06.538537  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:09.071758  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:09.082621  124886 kubeadm.go:601] duration metric: took 4m3.01446136s to restartPrimaryControlPlane
	W1008 14:54:09.082718  124886 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 14:54:09.082774  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:54:09.534098  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:54:09.546894  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:54:09.555065  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:54:09.555116  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:54:09.563122  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:54:09.563134  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:54:09.563181  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:54:09.571418  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:54:09.571492  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:54:09.579061  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:54:09.587199  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:54:09.587244  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:54:09.594420  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.602223  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:54:09.602263  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.609598  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:54:09.616978  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:54:09.617035  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:54:09.624225  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:54:09.679083  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:54:09.736432  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:58:12.118648  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:58:12.118737  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
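	[editor's note] The wait-control-plane failure above names three health endpoints that never became reachable. As a hedged triage sketch (not part of the recorded run), the same endpoints and container state can be probed from a shell on the affected node; the addresses, ports, and unit names below are taken directly from the log lines above, and the regex filter passed to crictl is an illustrative assumption:

	# Probe the endpoints kubeadm's wait-control-plane phase was checking
	curl -ks https://127.0.0.1:10259/livez;   echo   # kube-scheduler
	curl -ks https://127.0.0.1:10257/healthz; echo   # kube-controller-manager
	curl -ks https://192.168.49.2:8441/livez; echo   # kube-apiserver (this profile uses port 8441)

	# If all three refuse connections, check whether the static-pod containers were ever created,
	# then inspect kubelet's recent activity for manifest/start errors
	sudo crictl ps -a --name 'kube-apiserver|kube-scheduler|kube-controller-manager|etcd'
	sudo journalctl -u kubelet -n 100 --no-pager

	A run where crictl lists no control-plane containers at all (as in the "0 containers" loops earlier in this log) points at kubelet never starting the static pods, rather than at the components crashing after startup.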
	I1008 14:58:12.121564  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:58:12.121611  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:58:12.121691  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:58:12.121739  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:58:12.121768  124886 kubeadm.go:318] OS: Linux
	I1008 14:58:12.121805  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:58:12.121846  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:58:12.121885  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:58:12.121936  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:58:12.121975  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:58:12.122056  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:58:12.122130  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:58:12.122194  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:58:12.122280  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:58:12.122381  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:58:12.122523  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:58:12.122608  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:58:12.124721  124886 out.go:252]   - Generating certificates and keys ...
	I1008 14:58:12.124815  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:58:12.124880  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:58:12.124964  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:58:12.125031  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:58:12.125148  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:58:12.125193  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:58:12.125282  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:58:12.125362  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:58:12.125490  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:58:12.125594  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:58:12.125626  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:58:12.125673  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:58:12.125714  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:58:12.125760  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:58:12.125802  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:58:12.125857  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:58:12.125902  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:58:12.125971  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:58:12.126032  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:58:12.128152  124886 out.go:252]   - Booting up control plane ...
	I1008 14:58:12.128237  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:58:12.128300  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:58:12.128371  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:58:12.128508  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:58:12.128583  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:58:12.128689  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:58:12.128762  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:58:12.128794  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:58:12.128904  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:58:12.128993  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:58:12.129038  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0016053s
	I1008 14:58:12.129115  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:58:12.129187  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:58:12.129304  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:58:12.129408  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:58:12.129490  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	I1008 14:58:12.129546  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	I1008 14:58:12.129607  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	I1008 14:58:12.129609  124886 kubeadm.go:318] 
	I1008 14:58:12.129696  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:58:12.129765  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:58:12.129857  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:58:12.129935  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:58:12.129999  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:58:12.130073  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:58:12.130125  124886 kubeadm.go:318] 
	W1008 14:58:12.130230  124886 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
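The kubeadm failure above defers the real diagnosis to the container runtime. A minimal sketch of that triage, using only commands this log itself mentions (the CRI-O socket path is quoted in the hint above, and the journalctl invocations appear verbatim later in this run); CONTAINERID is a placeholder for whatever ID the first command returns:

    # list every kube-* container CRI-O has attempted, including exited ones
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of a failing container by its ID
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # CRI-O and kubelet unit logs, the same sources minikube gathers further down
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400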
	
	I1008 14:58:12.130328  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:58:12.582965  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:58:12.596265  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:58:12.596310  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:58:12.604829  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:58:12.604840  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:58:12.604880  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:58:12.613146  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:58:12.613253  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:58:12.621163  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:58:12.629390  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:58:12.629433  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:58:12.637274  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.645831  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:58:12.645886  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.653972  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:58:12.662348  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:58:12.662392  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:58:12.670230  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:58:12.730328  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:58:12.789898  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:02:14.463875  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:02:14.464082  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:02:14.466966  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:02:14.467026  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:02:14.467112  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:02:14.467156  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:02:14.467184  124886 kubeadm.go:318] OS: Linux
	I1008 15:02:14.467232  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:02:14.467270  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:02:14.467309  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:02:14.467348  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:02:14.467386  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:02:14.467424  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:02:14.467494  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:02:14.467536  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:02:14.467596  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:02:14.467693  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:02:14.467779  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:02:14.467827  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:02:14.470599  124886 out.go:252]   - Generating certificates and keys ...
	I1008 15:02:14.470674  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:02:14.470757  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:02:14.470867  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:02:14.470954  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:02:14.471017  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:02:14.471091  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:02:14.471148  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:02:14.471198  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:02:14.471289  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:02:14.471353  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:02:14.471382  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:02:14.471424  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:02:14.471487  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:02:14.471529  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:02:14.471569  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:02:14.471615  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:02:14.471657  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:02:14.471734  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:02:14.471802  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:02:14.473075  124886 out.go:252]   - Booting up control plane ...
	I1008 15:02:14.473133  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:02:14.473209  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:02:14.473257  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:02:14.473356  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:02:14.473436  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:02:14.473538  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:02:14.473606  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:02:14.473637  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:02:14.473747  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:02:14.473833  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:02:14.473877  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.93866ms
	I1008 15:02:14.473950  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:02:14.474013  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 15:02:14.474094  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:02:14.474159  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:02:14.474228  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	I1008 15:02:14.474292  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	I1008 15:02:14.474371  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	I1008 15:02:14.474380  124886 kubeadm.go:318] 
	I1008 15:02:14.474476  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:02:14.474542  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:02:14.474617  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:02:14.474713  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:02:14.474773  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:02:14.474854  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:02:14.474900  124886 kubeadm.go:318] 
	I1008 15:02:14.474937  124886 kubeadm.go:402] duration metric: took 12m8.444330692s to StartCluster
	I1008 15:02:14.474986  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:02:14.475048  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:02:14.503050  124886 cri.go:89] found id: ""
	I1008 15:02:14.503067  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.503076  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:02:14.503082  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:02:14.503136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:02:14.530120  124886 cri.go:89] found id: ""
	I1008 15:02:14.530138  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.530145  124886 logs.go:284] No container was found matching "etcd"
	I1008 15:02:14.530149  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:02:14.530200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:02:14.555892  124886 cri.go:89] found id: ""
	I1008 15:02:14.555909  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.555916  124886 logs.go:284] No container was found matching "coredns"
	I1008 15:02:14.555921  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:02:14.555972  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:02:14.583336  124886 cri.go:89] found id: ""
	I1008 15:02:14.583351  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.583358  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:02:14.583363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:02:14.583409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:02:14.611139  124886 cri.go:89] found id: ""
	I1008 15:02:14.611160  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.611169  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:02:14.611175  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:02:14.611227  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:02:14.639405  124886 cri.go:89] found id: ""
	I1008 15:02:14.639422  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.639429  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:02:14.639434  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:02:14.639496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:02:14.666049  124886 cri.go:89] found id: ""
	I1008 15:02:14.666066  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.666073  124886 logs.go:284] No container was found matching "kindnet"
	I1008 15:02:14.666082  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:02:14.666093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:02:14.729847  124886 logs.go:123] Gathering logs for container status ...
	I1008 15:02:14.729877  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 15:02:14.760743  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 15:02:14.760761  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:02:14.827532  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 15:02:14.827555  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:02:14.842256  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:02:14.842273  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:02:14.900360  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
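Every kubectl call above fails the same way: nothing is answering on port 8441, which matches the earlier control-plane-check timeouts. A quick confirmation from inside the node, assuming ss and curl are available in the minikube image (an assumption, not something this log shows); the address and port are taken from the log:

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441
    # probe the livez endpoint kubeadm was polling; a healthy apiserver returns "ok"
    curl -k https://192.168.49.2:8441/livez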
	W1008 15:02:14.900380  124886 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:02:14.900418  124886 out.go:285] * 
	W1008 15:02:14.900560  124886 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.900582  124886 out.go:285] * 
	W1008 15:02:14.902936  124886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:02:14.906609  124886 out.go:203] 
	W1008 15:02:14.908139  124886 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.908172  124886 out.go:285] * 
	I1008 15:02:14.910356  124886 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.235970167Z" level=info msg="createCtr: removing container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.236014435Z" level=info msg="createCtr: deleting container 4622624887c51d17a4f48b4c114309c458af133e7e5f0ebb6e52f32925508ca2 from storage" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:12 functional-367186 crio[5841]: time="2025-10-08T15:02:12.238146031Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=c2e85893-b694-4b31-a6c9-751ad38634be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.213078537Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=54863201-7b39-4ed4-ab14-0d41c1a7c865 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.21401263Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=aea1d193-b8b9-4b9f-b6bb-340acce60e77 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.214965671Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.215222603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218562955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.218978786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.240788352Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242470926Z" level=info msg="createCtr: deleting container ID 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from idIndex" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242521147Z" level=info msg="createCtr: removing container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.242570796Z" level=info msg="createCtr: deleting container 74e76ea0be473823a1f4de85ec2a196ce0ba8adf7a8a0fc53bc9efa28a03acee from storage" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:13 functional-367186 crio[5841]: time="2025-10-08T15:02:13.244732312Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=7341d9dd-962b-40df-89cd-c06224df7115 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.212073438Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ca8f7879-e326-4639-a9ef-c0c1dfa414a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.213119818Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ecb74a96-577c-422d-a45c-95595f453ce8 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.214207343Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.214530133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.220777224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.221381192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.235426916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.236981196Z" level=info msg="createCtr: deleting container ID 2613a0d4a3380b900751d682e8322989397f568ab578ee7ffa4f599a27aa571c from idIndex" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.237047841Z" level=info msg="createCtr: removing container 2613a0d4a3380b900751d682e8322989397f568ab578ee7ffa4f599a27aa571c" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.23711764Z" level=info msg="createCtr: deleting container 2613a0d4a3380b900751d682e8322989397f568ab578ee7ffa4f599a27aa571c from storage" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.240485039Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
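The recurring "Container creation error: cannot open sd-bus: No such file or directory" entries above are CRI-O failing to reach the systemd bus while creating the etcd and kube-controller-manager containers, which is consistent with a systemd cgroup manager being requested in an environment where systemd's bus is not reachable. A hedged way to check that configuration; /etc/crio/crio.conf and crio.conf.d are the usual CRI-O config locations and are assumptions, not paths taken from this log:

    # which cgroup manager is CRI-O configured to use (systemd vs cgroupfs)?
    sudo grep -ri cgroup_manager /etc/crio/ 2>/dev/null
    # the same sd-bus failures should be visible in the CRI-O unit log gathered above
    sudo journalctl -u crio -n 400 | grep -i sd-bus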
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:23.671749   16598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:23.672901   16598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:23.673494   16598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:23.675176   16598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:23.675639   16598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:23 up  2:44,  0 user,  load average: 1.03, 0.25, 0.29
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:12 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:12 functional-367186 kubelet[14967]: E1008 15:02:12.238621   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.212513   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245058   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > podSandboxID="49d755d590c1e6c75fffb26df4018ef3af1ece9b6aef63dbe754f59f467146f3"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245169   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:13 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:13 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:13 functional-367186 kubelet[14967]: E1008 15:02:13.245209   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:14 functional-367186 kubelet[14967]: E1008 15:02:14.233845   14967 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 15:02:16 functional-367186 kubelet[14967]: E1008 15:02:16.045402   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 15:02:17 functional-367186 kubelet[14967]: E1008 15:02:17.036703   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 08 15:02:17 functional-367186 kubelet[14967]: E1008 15:02:17.835695   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: I1008 15:02:18.001053   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.001494   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.211634   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.240870   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:18 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:18 functional-367186 kubelet[14967]:  > podSandboxID="6ab3169b39f563ff749bb50d5d8d7a3bb62a9ced39a9d97f82c3acd85f61e1c9"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.240973   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:18 functional-367186 kubelet[14967]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:18 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.241004   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 15:02:23 functional-367186 kubelet[14967]: E1008 15:02:23.277935   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (370.27986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (3.56s)
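Every failure above reduces to the same symptom: nothing answers on the kube-apiserver endpoint 192.168.49.2:8441 (or its localhost:8441 alias inside the node), because CRI-O cannot create the control-plane containers ("cannot open sd-bus: No such file or directory"). The following is a minimal, hypothetical Go probe, not part of the minikube test suite, that can confirm the connection-refused symptom from the host; the address is copied from the logs above.

	// apiprobe.go - hypothetical helper, not part of minikube's tests.
	// Dials the apiserver TCP endpoint reported in the failure logs to
	// confirm whether the "connection refused" symptom is still present.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // control-plane endpoint from the logs above
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("TCP connect to %s succeeded; apiserver port is open\n", addr)
	}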

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-367186 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-367186 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.463731ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-367186 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-367186 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-367186 describe po hello-node-connect: exit status 1 (53.8516ms)

                                                
                                                
** stderr ** 
	E1008 15:02:31.454933  147154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.455313  147154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.456936  147154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.457296  147154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.458819  147154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-367186 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-367186 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-367186 logs -l app=hello-node-connect: exit status 1 (59.626194ms)

                                                
                                                
** stderr ** 
	E1008 15:02:31.514006  147183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.514433  147183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.516983  147183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.517315  147183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-367186 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-367186 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-367186 describe svc hello-node-connect: exit status 1 (54.629284ms)

                                                
                                                
** stderr ** 
	E1008 15:02:31.568554  147238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.569285  147238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.570194  147238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.571664  147238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:31.571972  147238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-367186 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
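The inspect output above shows 8441/tcp published on 127.0.0.1:32781. As a cross-check, a small hypothetical Go sketch (assuming the docker CLI is on PATH and the container name functional-367186 shown above) can pull that mapping programmatically rather than reading the full JSON dump:

	// portmap.go - hypothetical sketch; assumes the docker CLI is installed
	// and the container "functional-367186" exists, as in the inspect output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-367186").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatalf("parsing inspect output: %v", err)
		}
		if len(containers) == 0 {
			log.Fatal("no container found in inspect output")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver 8441/tcp published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}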
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (313.153922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ tunnel    │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount3 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount1 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount     │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount2 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ tunnel    │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh       │ functional-367186 ssh findmnt -T /mount2                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh       │ functional-367186 ssh findmnt -T /mount3                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ mount     │ -p functional-367186 --kill=true                                                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh       │ functional-367186 ssh sudo cat /etc/test/nested/copy/98900/hosts                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image rm kicbase/echo-server:functional-367186 --alsologtostderr                                                                              │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image     │ functional-367186 image save --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ start     │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start     │ -p functional-367186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start     │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ addons    │ functional-367186 addons list                                                                                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ addons    │ functional-367186 addons list -o json                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ dashboard │ --url --port 36195 -p functional-367186 --alsologtostderr -v=1                                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:02:31
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:02:31.228491  146984 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:31.228757  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.228769  146984 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:31.228775  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.229092  146984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:31.229608  146984 out.go:368] Setting JSON to false
	I1008 15:02:31.230544  146984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9902,"bootTime":1759925849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:02:31.230642  146984 start.go:141] virtualization: kvm guest
	I1008 15:02:31.232608  146984 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:02:31.234774  146984 notify.go:220] Checking for updates...
	I1008 15:02:31.234788  146984 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:02:31.236372  146984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:02:31.237980  146984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:02:31.239532  146984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:02:31.240888  146984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:02:31.242413  146984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:02:31.244247  146984 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:31.244801  146984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:02:31.271217  146984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:02:31.271332  146984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:02:31.337074  146984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:31.325606098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:02:31.337200  146984 docker.go:318] overlay module found
	I1008 15:02:31.339135  146984 out.go:179] * Using the docker driver based on the existing profile
	I1008 15:02:31.340433  146984 start.go:305] selected driver: docker
	I1008 15:02:31.340459  146984 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:02:31.340589  146984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:02:31.342564  146984 out.go:203] 
	W1008 15:02:31.343899  146984 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1008 15:02:31.345192  146984 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.025485551Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=a4f09100-a89a-48dc-89f1-535c556a80a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052496663Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052654742Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.05273608Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078814601Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078975874Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.079026616Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.876199233Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=bcb6792f-0817-4dec-aab1-936038b6e1e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905821555Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905973538Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.906015096Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934168176Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934313118Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934355764Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.212253616Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=780ea47b-00a3-4ad3-b471-044379f619e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.213350577Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=01f8a6fe-79dc-475e-8a32-44eb2d1fe360 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.21442001Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.214709008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.219408546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.219977147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.239166175Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.240977302Z" level=info msg="createCtr: deleting container ID a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98 from idIndex" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.241018868Z" level=info msg="createCtr: removing container a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.241056209Z" level=info msg="createCtr: deleting container a219ed28be0ddb1a6676ee003827b02a69726eedbf9e940367e177ee7ac71a98 from storage" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:32 functional-367186 crio[5841]: time="2025-10-08T15:02:32.243658308Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=e63065cd-ce0f-4340-b8c3-e2eb07d5ac7f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:32.519938   17968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.520439   17968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.522060   17968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.522550   17968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:32.524163   17968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:32 up  2:45,  0 user,  load average: 1.32, 0.34, 0.32
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252072   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.046948   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.212548   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244164   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244294   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244335   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.115019   14967 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.438217   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 08 15:02:31 functional-367186 kubelet[14967]: E1008 15:02:31.838233   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: I1008 15:02:32.009938   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.010822   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.211773   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244004   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:32 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:32 functional-367186 kubelet[14967]:  > podSandboxID="6ab3169b39f563ff749bb50d5d8d7a3bb62a9ced39a9d97f82c3acd85f61e1c9"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244152   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:32 functional-367186 kubelet[14967]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:32 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:32 functional-367186 kubelet[14967]: E1008 15:02:32.244200   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (310.277409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1008 15:02:36.496226   98900 retry.go:31] will retry after 5.896778922s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1008 15:02:42.394138   98900 retry.go:31] will retry after 7.64258857s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1008 15:02:50.037840   98900 retry.go:31] will retry after 28.52276129s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1008 15:03:18.561226   98900 retry.go:31] will retry after 20.40188045s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... the preceding "connection refused" warning repeats verbatim on every subsequent poll of the kube-system pod list for the remainder of the 4m0s wait ...]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (310.508994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
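The wait above is a straightforward poll: the helper lists kube-system pods matching the label selector integration-test=storage-provisioner and retries until one reports Running or the 4m0s deadline expires; since nothing answers on 192.168.49.2:8441, every attempt fails with "connection refused" and the loop ends with "context deadline exceeded". A minimal client-go sketch of that kind of poll (not the helper's actual code; the kubeconfig path is the KUBECONFIG value printed later in this log, and the 2-second retry interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig for the functional-367186 profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21681-94984/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall deadline as the failed wait above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		switch {
		case err != nil:
			// With the apiserver down this prints "connection refused"; once the
			// deadline passes, client-go's rate limiter returns "context deadline exceeded".
			fmt.Println("WARNING:", err)
		case len(pods.Items) > 0 && pods.Items[0].Status.Phase == "Running":
			fmt.Println("storage-provisioner pod is running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}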
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
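One useful fact recoverable from the inspect output is the port wiring: 8441/tcp inside the container (the apiserver port) is published on the host at 127.0.0.1:32781, and the node holds 192.168.49.2 on the functional-367186 network. A small Go probe of that published port, handy for confirming from the host that nothing is listening; the /healthz path and the skipped TLS verification are assumptions for a bare reachability check, not something the test suite does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// 8441/tcp in the container maps to 127.0.0.1:32781 on the host (see docker inspect above).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Only checking whether anything answers, so certificate validation is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:32781/healthz")
	if err != nil {
		// Expected while the apiserver is down: "connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}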
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (300.265671ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-367186 ssh findmnt -T /mount3                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ mount          │ -p functional-367186 --kill=true                                                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh            │ functional-367186 ssh sudo cat /etc/test/nested/copy/98900/hosts                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image rm kicbase/echo-server:functional-367186 --alsologtostderr                                                                              │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image save --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ start          │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start          │ -p functional-367186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start          │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ addons         │ functional-367186 addons list                                                                                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ addons         │ functional-367186 addons list -o json                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ dashboard      │ --url --port 36195 -p functional-367186 --alsologtostderr -v=1                                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format short --alsologtostderr                                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh            │ functional-367186 ssh pgrep buildkitd                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image          │ functional-367186 image ls --format yaml --alsologtostderr                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format json --alsologtostderr                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format table --alsologtostderr                                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:02:31
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:02:31.228491  146984 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:31.228757  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.228769  146984 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:31.228775  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.229092  146984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:31.229608  146984 out.go:368] Setting JSON to false
	I1008 15:02:31.230544  146984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9902,"bootTime":1759925849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:02:31.230642  146984 start.go:141] virtualization: kvm guest
	I1008 15:02:31.232608  146984 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:02:31.234774  146984 notify.go:220] Checking for updates...
	I1008 15:02:31.234788  146984 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:02:31.236372  146984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:02:31.237980  146984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:02:31.239532  146984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:02:31.240888  146984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:02:31.242413  146984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:02:31.244247  146984 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:31.244801  146984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:02:31.271217  146984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:02:31.271332  146984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:02:31.337074  146984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:31.325606098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:02:31.337200  146984 docker.go:318] overlay module found
	I1008 15:02:31.339135  146984 out.go:179] * Using the docker driver based on existing profile
	I1008 15:02:31.340433  146984 start.go:305] selected driver: docker
	I1008 15:02:31.340459  146984 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:02:31.340589  146984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:02:31.342564  146984 out.go:203] 
	W1008 15:02:31.343899  146984 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1008 15:02:31.345192  146984 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:06:22 functional-367186 crio[5841]: time="2025-10-08T15:06:22.239525594Z" level=info msg="createCtr: removing container 829bad243527d538cd1913f3e749952a2e1440918707465d1922ce2783e94809" id=a0cc1ac6-af4b-4ea6-9cb2-bd6c3cc3a759 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:22 functional-367186 crio[5841]: time="2025-10-08T15:06:22.239563078Z" level=info msg="createCtr: deleting container 829bad243527d538cd1913f3e749952a2e1440918707465d1922ce2783e94809 from storage" id=a0cc1ac6-af4b-4ea6-9cb2-bd6c3cc3a759 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:22 functional-367186 crio[5841]: time="2025-10-08T15:06:22.241977705Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=a0cc1ac6-af4b-4ea6-9cb2-bd6c3cc3a759 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.212395467Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=618e1d58-ca55-446e-b725-aeb5a003983b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.212437318Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2921d8d6-3dea-4095-955a-edbaf9d1f886 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.213314532Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=51025116-5da0-48df-b01c-fad53f31b705 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.21335641Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=48c8e7a3-98b7-4288-ba23-e5e5d90111be name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.214304233Z" level=info msg="Creating container: kube-system/etcd-functional-367186/etcd" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.214340502Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-367186/kube-scheduler" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.21453844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.214548686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.218881021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.219295722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.220415608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.22096319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.237077016Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.237726468Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.238648334Z" level=info msg="createCtr: deleting container ID 08997d3869e10ed5849706e2c729acc95fb41c2a53da2c109f903f8cf4675167 from idIndex" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.238684384Z" level=info msg="createCtr: removing container 08997d3869e10ed5849706e2c729acc95fb41c2a53da2c109f903f8cf4675167" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.238716531Z" level=info msg="createCtr: deleting container 08997d3869e10ed5849706e2c729acc95fb41c2a53da2c109f903f8cf4675167 from storage" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.239201728Z" level=info msg="createCtr: deleting container ID f7a2cf66b4a1ebe436f9dd28abd7f4dfb87762b425f54013ad20000ca19b725d from idIndex" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.239239197Z" level=info msg="createCtr: removing container f7a2cf66b4a1ebe436f9dd28abd7f4dfb87762b425f54013ad20000ca19b725d" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.239270131Z" level=info msg="createCtr: deleting container f7a2cf66b4a1ebe436f9dd28abd7f4dfb87762b425f54013ad20000ca19b725d from storage" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.242304778Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=5aedaa72-0c47-4890-859c-d6860f6f1cab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:06:23 functional-367186 crio[5841]: time="2025-10-08T15:06:23.242687644Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=dd1bedd4-9009-432c-be52-f7a235c36031 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:06:28.106609   19215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:06:28.107312   19215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:06:28.109079   19215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:06:28.109620   19215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:06:28.111158   19215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:06:28 up  2:48,  0 user,  load average: 0.03, 0.17, 0.25
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:06:22 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:06:22 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:06:22 functional-367186 kubelet[14967]: E1008 15:06:22.242527   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:06:22 functional-367186 kubelet[14967]: E1008 15:06:22.877985   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: I1008 15:06:23.081589   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.082002   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.211897   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.211941   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.242721   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:06:23 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:06:23 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.242839   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:06:23 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:06:23 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.242908   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.242986   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:06:23 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:06:23 functional-367186 kubelet[14967]:  > podSandboxID="6ab3169b39f563ff749bb50d5d8d7a3bb62a9ced39a9d97f82c3acd85f61e1c9"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.243123   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:06:23 functional-367186 kubelet[14967]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:06:23 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:06:23 functional-367186 kubelet[14967]: E1008 15:06:23.244281   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 15:06:24 functional-367186 kubelet[14967]: E1008 15:06:24.033508   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-367186.186c8c01e7d95228\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d95228  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-367186 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207656488 +0000 UTC m=+0.248865054,LastTimestamp:2025-10-08 14:58:14.209148813 +0000 UTC m=+0.250357377,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,Repo
rtingInstance:functional-367186,}"
	Oct 08 15:06:24 functional-367186 kubelet[14967]: E1008 15:06:24.251101   14967 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 15:06:27 functional-367186 kubelet[14967]: E1008 15:06:27.704542   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (303.005031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.59s)
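The failure above is consistent with the CRI-O and kubelet logs in the post-mortem: every control-plane container (etcd, kube-scheduler, kube-controller-manager) fails with "container create failed: cannot open sd-bus: No such file or directory", so the apiserver on port 8441 never comes up and every kubectl call is refused. That error is typically emitted by the OCI runtime when it is configured for the systemd cgroup manager but cannot reach the systemd D-Bus socket inside the node container. A minimal way to confirm from the host, assuming the profile container is still running and the standard systemd/D-Bus socket paths (both are assumptions for illustration, not taken from the report):

	out/minikube-linux-amd64 -p functional-367186 ssh "ls -l /run/systemd/private /run/dbus/system_bus_socket"   # sockets sd-bus connects to
	out/minikube-linux-amd64 -p functional-367186 ssh "systemctl is-system-running"                              # systemd state inside the node
	out/minikube-linux-amd64 -p functional-367186 ssh "sudo journalctl -u crio --no-pager -n 20"                 # recent CRI-O errors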

                                                
                                    
x
+
TestFunctional/parallel/MySQL (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-367186 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-367186 replace --force -f testdata/mysql.yaml: exit status 1 (51.428904ms)

                                                
                                                
** stderr ** 
	E1008 15:02:29.187573  145965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:29.188052  145965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-367186 replace --force -f testdata/mysql.yaml" failed: exit status 1
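This MySQL failure is a downstream symptom of the same outage: kubectl cannot reach the apiserver at 192.168.49.2:8441 ("connection refused"), so the replace of testdata/mysql.yaml never runs. A quick check of apiserver health before rerunning, assuming the kube context for this profile is still configured (a sketch, not part of the test harness):

	out/minikube-linux-amd64 -p functional-367186 status                      # host/kubelet/apiserver state as minikube sees it
	kubectl --context functional-367186 get --raw /readyz                     # queries the apiserver readiness endpoint directly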
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (306.015035ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-367186 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo cat /etc/ssl/certs/989002.pem                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo cat /usr/share/ca-certificates/989002.pem                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh -- ls -la /mount-9p                                                                                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ tunnel  │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ tunnel  │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount   │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount3 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount   │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount1 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount   │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount2 --alsologtostderr -v=1                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image   │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ tunnel  │ functional-367186 tunnel --alsologtostderr                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh findmnt -T /mount1                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh findmnt -T /mount2                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh findmnt -T /mount3                                                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ mount   │ -p functional-367186 --kill=true                                                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh sudo cat /etc/test/nested/copy/98900/hosts                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:50:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:50:02.487614  124886 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:02.487885  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.487890  124886 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:02.487894  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.488148  124886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:50:02.488703  124886 out.go:368] Setting JSON to false
	I1008 14:50:02.489732  124886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9153,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:50:02.489824  124886 start.go:141] virtualization: kvm guest
	I1008 14:50:02.491855  124886 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:50:02.493271  124886 notify.go:220] Checking for updates...
	I1008 14:50:02.493279  124886 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:50:02.494598  124886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:50:02.495836  124886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:50:02.497242  124886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:50:02.498624  124886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:50:02.499973  124886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:50:02.501897  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:02.502018  124886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:50:02.525193  124886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:50:02.525315  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.584022  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.573926988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.584110  124886 docker.go:318] overlay module found
	I1008 14:50:02.585968  124886 out.go:179] * Using the docker driver based on existing profile
	I1008 14:50:02.587279  124886 start.go:305] selected driver: docker
	I1008 14:50:02.587288  124886 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.587409  124886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:50:02.587529  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.641632  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.631975419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.642294  124886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:50:02.642317  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:02.642374  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:02.642409  124886 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.644427  124886 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:50:02.645877  124886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:50:02.647092  124886 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:50:02.648224  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:02.648254  124886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:50:02.648262  124886 cache.go:58] Caching tarball of preloaded images
	I1008 14:50:02.648344  124886 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:50:02.648340  124886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:50:02.648350  124886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:50:02.648438  124886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:50:02.667989  124886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:50:02.668000  124886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:50:02.668014  124886 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:50:02.668041  124886 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:50:02.668096  124886 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "functional-367186"
	I1008 14:50:02.668109  124886 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:50:02.668113  124886 fix.go:54] fixHost starting: 
	I1008 14:50:02.668337  124886 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:50:02.684543  124886 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:50:02.684562  124886 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:50:02.686414  124886 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:50:02.686441  124886 machine.go:93] provisionDockerMachine start ...
	I1008 14:50:02.686552  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.704251  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.704482  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.704488  124886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:50:02.850612  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:02.850631  124886 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:50:02.850683  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.868208  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.868417  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.868424  124886 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:50:03.024186  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:03.024255  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.041071  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.041277  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.041288  124886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:50:03.186253  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
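For reference, the hostname/hosts update the provisioner just ran can be spot-checked from the host. This is only an illustrative sketch: the container name functional-367186 is taken from the log above, and docker exec/grep are standard tooling rather than part of the test run.
	docker exec functional-367186 hostname                                    # expect: functional-367186
	docker exec functional-367186 grep functional-367186 /etc/hosts           # expect the 127.0.1.1 mapping added above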
	I1008 14:50:03.186270  124886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:50:03.186287  124886 ubuntu.go:190] setting up certificates
	I1008 14:50:03.186296  124886 provision.go:84] configureAuth start
	I1008 14:50:03.186366  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:03.203498  124886 provision.go:143] copyHostCerts
	I1008 14:50:03.203554  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:50:03.203567  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:50:03.203633  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:50:03.203728  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:50:03.203738  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:50:03.203764  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:50:03.203811  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:50:03.203815  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:50:03.203835  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:50:03.203891  124886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
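The server certificate generated above embeds the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-367186, localhost, minikube). A hedged way to confirm that, using the path from this run and plain openssl:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'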
	I1008 14:50:03.342698  124886 provision.go:177] copyRemoteCerts
	I1008 14:50:03.342747  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:50:03.342789  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.359931  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.462754  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:50:03.480100  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:50:03.497218  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:50:03.514367  124886 provision.go:87] duration metric: took 328.059175ms to configureAuth
	I1008 14:50:03.514387  124886 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:50:03.514597  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:03.514714  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.531920  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.532136  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.532149  124886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:50:03.804333  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:50:03.804348  124886 machine.go:96] duration metric: took 1.117888769s to provisionDockerMachine
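The CRIO_MINIKUBE_OPTIONS drop-in written above is what carries the --insecure-registry 10.96.0.0/12 flag into the CRI-O service. A rough way to confirm it landed and that crio came back up after the restart (assuming the file stays at /etc/sysconfig/crio.minikube inside the node, as the command above suggests):
	docker exec functional-367186 sudo cat /etc/sysconfig/crio.minikube
	docker exec functional-367186 sudo systemctl is-active crio               # expect: active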
	I1008 14:50:03.804358  124886 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:50:03.804366  124886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:50:03.804425  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:50:03.804490  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.822222  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.925021  124886 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:50:03.928570  124886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:50:03.928586  124886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:50:03.928595  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:50:03.928648  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:50:03.928714  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:50:03.928776  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:50:03.928851  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:50:03.936383  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:03.953682  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:50:03.970665  124886 start.go:296] duration metric: took 166.291312ms for postStartSetup
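The two filesync assets copied above land at fixed paths inside the node, so their presence can be verified directly; the paths below are copied from the log lines above, the command itself is illustrative.
	docker exec functional-367186 ls -l /etc/ssl/certs/989002.pem /etc/test/nested/copy/98900/hosts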
	I1008 14:50:03.970729  124886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:03.970760  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.987625  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.086669  124886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:50:04.091298  124886 fix.go:56] duration metric: took 1.423178254s for fixHost
	I1008 14:50:04.091311  124886 start.go:83] releasing machines lock for "functional-367186", held for 1.423209484s
	I1008 14:50:04.091360  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:04.107787  124886 ssh_runner.go:195] Run: cat /version.json
	I1008 14:50:04.107823  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.107871  124886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:50:04.107944  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.125505  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.126027  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.277012  124886 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:04.283607  124886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:50:04.317281  124886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:50:04.322127  124886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:50:04.322186  124886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:50:04.329933  124886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:50:04.329948  124886 start.go:495] detecting cgroup driver to use...
	I1008 14:50:04.329985  124886 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:50:04.330037  124886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:50:04.344088  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:50:04.355897  124886 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:50:04.355934  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:50:04.370666  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:50:04.383061  124886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:50:04.469185  124886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:50:04.555865  124886 docker.go:234] disabling docker service ...
	I1008 14:50:04.555933  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:50:04.571649  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:50:04.585004  124886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:50:04.673830  124886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:50:04.762936  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
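At this point dockerd and cri-docker have been stopped and masked so that CRI-O is the only runtime the kubelet can reach. An illustrative check with standard systemctl usage (not part of the test run):
	docker exec functional-367186 sudo systemctl is-enabled docker.socket cri-docker.socket   # expect: masked
	docker exec functional-367186 sudo systemctl is-active crio                               # expect: active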
	I1008 14:50:04.775689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:50:04.790127  124886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:50:04.790172  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.799414  124886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:50:04.799484  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.808366  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.816703  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.825175  124886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:50:04.833160  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.842121  124886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.850355  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.859028  124886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:50:04.866049  124886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:50:04.873109  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:04.955543  124886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:50:05.069798  124886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:50:05.069856  124886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:50:05.074109  124886 start.go:563] Will wait 60s for crictl version
	I1008 14:50:05.074171  124886 ssh_runner.go:195] Run: which crictl
	I1008 14:50:05.077741  124886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:50:05.103519  124886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
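The crictl version output above confirms the CRI socket answers as cri-o 1.34.1. The pause-image and cgroup-driver edits made with sed a few lines earlier can be double-checked the same way; a sketch using crio config (which this flow itself calls further down) and the explicit runtime endpoint:
	docker exec functional-367186 sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'
	docker exec functional-367186 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version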
	I1008 14:50:05.103581  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.131061  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.160549  124886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:50:05.161770  124886 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:50:05.178428  124886 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:50:05.184282  124886 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 14:50:05.185372  124886 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:50:05.185532  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:05.185581  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.219145  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.219157  124886 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:50:05.219203  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.244747  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.244760  124886 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:50:05.244766  124886 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:50:05.244868  124886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:50:05.244932  124886 ssh_runner.go:195] Run: crio config
	I1008 14:50:05.290552  124886 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 14:50:05.290627  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:05.290634  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:05.290643  124886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:50:05.290661  124886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:50:05.290774  124886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:50:05.290829  124886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:50:05.299112  124886 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:50:05.299181  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:50:05.307519  124886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:50:05.319796  124886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:50:05.331988  124886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
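The three scp lines above stage the kubelet systemd drop-in, the kubelet unit, and the new kubeadm config on the node. Their contents can be inspected in place; the paths are the scp targets from the log, the rest is illustrative:
	docker exec functional-367186 sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	docker exec functional-367186 sudo head -n 30 /var/tmp/minikube/kubeadm.yaml.new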
	I1008 14:50:05.344225  124886 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:50:05.347910  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:05.434760  124886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:50:05.447481  124886 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:50:05.447496  124886 certs.go:195] generating shared ca certs ...
	I1008 14:50:05.447517  124886 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:50:05.447665  124886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:50:05.447699  124886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:50:05.447705  124886 certs.go:257] generating profile certs ...
	I1008 14:50:05.447783  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:50:05.447822  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:50:05.447852  124886 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:50:05.447956  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:50:05.447979  124886 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:50:05.447984  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:50:05.448004  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:50:05.448022  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:50:05.448039  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:50:05.448072  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:05.448723  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:50:05.466280  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:50:05.482753  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:50:05.499451  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:50:05.516010  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:50:05.532903  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:50:05.549460  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:50:05.566552  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:50:05.584248  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:50:05.601250  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:50:05.618600  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:50:05.636280  124886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:50:05.648959  124886 ssh_runner.go:195] Run: openssl version
	I1008 14:50:05.655372  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:50:05.664552  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668508  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668554  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.702319  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:50:05.710597  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:50:05.719238  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722899  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722944  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.756814  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:50:05.765232  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:50:05.773915  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777582  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777627  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.811974  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
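The .0 symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) come from the OpenSSL subject-hash of each CA certificate, which is what the preceding openssl x509 -hash calls compute. A minimal sketch of the same scheme, run inside the node, that simply re-derives the hash for the minikubeCA cert installed above:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"          # should resolve back to minikubeCA.pem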
	I1008 14:50:05.820369  124886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:50:05.824309  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:50:05.858210  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:50:05.892122  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:50:05.926997  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:50:05.961508  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:50:05.996031  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
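The openssl -checkend 86400 calls above exit non-zero only if a certificate expires within the next 24 hours, so a failing check would flag a control-plane cert as due for regeneration. The same check, looped over the certs listed above and run inside the node, would look roughly like this:
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 >/dev/null \
	    || echo "$c expires within 24h"
	done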
	I1008 14:50:06.030615  124886 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:06.030703  124886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:50:06.030782  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.059591  124886 cri.go:89] found id: ""
	I1008 14:50:06.059641  124886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:50:06.068127  124886 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:50:06.068151  124886 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:50:06.068205  124886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:50:06.076226  124886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.076725  124886 kubeconfig.go:125] found "functional-367186" server: "https://192.168.49.2:8441"
	I1008 14:50:06.077896  124886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:50:06.086029  124886 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 14:35:34.873718023 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 14:50:05.341579042 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1008 14:50:06.086044  124886 kubeadm.go:1160] stopping kube-system containers ...
	I1008 14:50:06.086056  124886 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 14:50:06.086094  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.113178  124886 cri.go:89] found id: ""
	I1008 14:50:06.113245  124886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 14:50:06.155234  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:50:06.163592  124886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  8 14:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  8 14:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Oct  8 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  8 14:39 /etc/kubernetes/scheduler.conf
	
	I1008 14:50:06.163642  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:50:06.171483  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:50:06.179293  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.179397  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:50:06.186779  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.194154  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.194203  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.201651  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:50:06.209487  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.209530  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:50:06.217108  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:50:06.224828  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:06.265674  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.277477  124886 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011762147s)
	I1008 14:50:07.277533  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.443820  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.494457  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
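The restart path above re-runs a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config instead of doing a full kubeadm init. Reproduced as a plain loop with the version-pinned binaries on PATH, as in the log (an illustrative sketch, run inside the node):
	KVER=v1.34.1
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is left unquoted on purpose so e.g. "certs all" expands to subcommand + argument
	  sudo env PATH="/var/lib/minikube/binaries/$KVER:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done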
	I1008 14:50:07.547380  124886 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:50:07.547460  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` check repeats at ~500ms intervals from 14:50:08.047610 through 14:51:07.048533 without finding an apiserver process (119 near-identical log lines) ...]
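The minute of polling summarized above is minikube waiting (api_server.go:52) for a kube-apiserver process to appear after kubelet-start; here it never does, so the flow falls back to gathering logs below. An equivalent standalone wait loop, using the same pgrep pattern as the log (a bash sketch, not minikube's actual Go implementation):
	# poll every 500ms, give up after 60s
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo 'kube-apiserver did not start within 60s'; break; }
	  sleep 0.5
	done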
	I1008 14:51:07.548306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:07.548386  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:07.574942  124886 cri.go:89] found id: ""
	I1008 14:51:07.574974  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.574982  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:07.574988  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:07.575052  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:07.600942  124886 cri.go:89] found id: ""
	I1008 14:51:07.600957  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.600964  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:07.600968  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:07.601020  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:07.627307  124886 cri.go:89] found id: ""
	I1008 14:51:07.627324  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.627331  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:07.627336  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:07.627388  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:07.653908  124886 cri.go:89] found id: ""
	I1008 14:51:07.653925  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.653933  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:07.653938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:07.653988  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:07.681787  124886 cri.go:89] found id: ""
	I1008 14:51:07.681806  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.681814  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:07.681818  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:07.681881  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:07.707870  124886 cri.go:89] found id: ""
	I1008 14:51:07.707886  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.707892  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:07.707898  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:07.707955  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:07.734640  124886 cri.go:89] found id: ""
	I1008 14:51:07.734655  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.734662  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:07.734673  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:07.734682  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:07.804699  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:07.804721  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:07.819273  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:07.819290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:07.875686  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:07.875696  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:07.875709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:07.940091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:07.940122  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:10.470645  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:10.481694  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:10.481739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:10.506817  124886 cri.go:89] found id: ""
	I1008 14:51:10.506832  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.506839  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:10.506843  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:10.506898  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:10.531484  124886 cri.go:89] found id: ""
	I1008 14:51:10.531499  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.531506  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:10.531511  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:10.531558  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:10.557249  124886 cri.go:89] found id: ""
	I1008 14:51:10.557268  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.557277  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:10.557282  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:10.557333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:10.582779  124886 cri.go:89] found id: ""
	I1008 14:51:10.582797  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.582833  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:10.582838  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:10.582908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:10.608584  124886 cri.go:89] found id: ""
	I1008 14:51:10.608599  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.608606  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:10.608610  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:10.608653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:10.634540  124886 cri.go:89] found id: ""
	I1008 14:51:10.634557  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.634567  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:10.634573  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:10.634635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:10.659510  124886 cri.go:89] found id: ""
	I1008 14:51:10.659526  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.659532  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:10.659541  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:10.659552  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:10.727322  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:10.727344  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:10.741862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:10.741882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:10.798339  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:10.798350  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:10.798362  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:10.862340  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:10.862363  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.392975  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:13.404098  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:13.404165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:13.430215  124886 cri.go:89] found id: ""
	I1008 14:51:13.430231  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.430237  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:13.430242  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:13.430283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:13.455821  124886 cri.go:89] found id: ""
	I1008 14:51:13.455837  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.455844  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:13.455853  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:13.455903  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:13.482279  124886 cri.go:89] found id: ""
	I1008 14:51:13.482296  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.482316  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:13.482321  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:13.482366  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:13.508868  124886 cri.go:89] found id: ""
	I1008 14:51:13.508883  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.508893  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:13.508900  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:13.508957  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:13.534938  124886 cri.go:89] found id: ""
	I1008 14:51:13.534954  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.534960  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:13.534964  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:13.535012  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:13.562594  124886 cri.go:89] found id: ""
	I1008 14:51:13.562611  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.562620  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:13.562626  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:13.562683  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:13.588476  124886 cri.go:89] found id: ""
	I1008 14:51:13.588493  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.588505  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:13.588513  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:13.588522  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.617969  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:13.617996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:13.687989  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:13.688010  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:13.702556  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:13.702577  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:13.758238  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:13.758274  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:13.758288  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.324420  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:16.335355  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:16.335413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:16.361211  124886 cri.go:89] found id: ""
	I1008 14:51:16.361227  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.361233  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:16.361238  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:16.361283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:16.388154  124886 cri.go:89] found id: ""
	I1008 14:51:16.388170  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.388176  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:16.388180  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:16.388234  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:16.414515  124886 cri.go:89] found id: ""
	I1008 14:51:16.414532  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.414539  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:16.414545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:16.414606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:16.441112  124886 cri.go:89] found id: ""
	I1008 14:51:16.441130  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.441137  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:16.441143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:16.441196  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:16.467403  124886 cri.go:89] found id: ""
	I1008 14:51:16.467423  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.467434  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:16.467439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:16.467515  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:16.493912  124886 cri.go:89] found id: ""
	I1008 14:51:16.493994  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.494017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:16.494025  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:16.494086  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:16.520736  124886 cri.go:89] found id: ""
	I1008 14:51:16.520754  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.520761  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:16.520770  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:16.520784  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:16.578205  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:16.578222  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:16.578237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.641639  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:16.641661  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:16.671073  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:16.671090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:16.740879  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:16.740901  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.256721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:19.267621  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:19.267671  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:19.293587  124886 cri.go:89] found id: ""
	I1008 14:51:19.293605  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.293611  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:19.293616  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:19.293661  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:19.318866  124886 cri.go:89] found id: ""
	I1008 14:51:19.318886  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.318898  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:19.318905  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:19.318973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:19.344646  124886 cri.go:89] found id: ""
	I1008 14:51:19.344660  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.344668  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:19.344673  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:19.344730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:19.370979  124886 cri.go:89] found id: ""
	I1008 14:51:19.370994  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.371001  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:19.371006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:19.371049  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:19.398115  124886 cri.go:89] found id: ""
	I1008 14:51:19.398134  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.398144  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:19.398149  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:19.398205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:19.425579  124886 cri.go:89] found id: ""
	I1008 14:51:19.425594  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.425602  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:19.425606  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:19.425664  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:19.451179  124886 cri.go:89] found id: ""
	I1008 14:51:19.451194  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.451201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:19.451209  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:19.451219  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:19.515409  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:19.515430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.530193  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:19.530208  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:19.587513  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:19.587527  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:19.587538  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:19.650244  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:19.650266  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:22.181221  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:22.192437  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:22.192530  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:22.218691  124886 cri.go:89] found id: ""
	I1008 14:51:22.218709  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.218717  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:22.218722  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:22.218784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:22.245011  124886 cri.go:89] found id: ""
	I1008 14:51:22.245028  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.245035  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:22.245040  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:22.245087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:22.271669  124886 cri.go:89] found id: ""
	I1008 14:51:22.271698  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.271706  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:22.271710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:22.271775  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:22.298500  124886 cri.go:89] found id: ""
	I1008 14:51:22.298520  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.298529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:22.298537  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:22.298598  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:22.324858  124886 cri.go:89] found id: ""
	I1008 14:51:22.324873  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.324879  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:22.324883  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:22.324930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:22.351540  124886 cri.go:89] found id: ""
	I1008 14:51:22.351556  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.351563  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:22.351568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:22.351613  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:22.377421  124886 cri.go:89] found id: ""
	I1008 14:51:22.377458  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.377470  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:22.377482  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:22.377497  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:22.450410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:22.450465  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:22.465230  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:22.465257  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:22.521387  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:22.521398  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:22.521409  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:22.586462  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:22.586490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.117667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:25.129264  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:25.129309  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:25.155977  124886 cri.go:89] found id: ""
	I1008 14:51:25.155998  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.156007  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:25.156016  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:25.156090  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:25.183268  124886 cri.go:89] found id: ""
	I1008 14:51:25.183288  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.183297  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:25.183302  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:25.183355  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:25.209728  124886 cri.go:89] found id: ""
	I1008 14:51:25.209745  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.209752  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:25.209763  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:25.209807  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:25.236946  124886 cri.go:89] found id: ""
	I1008 14:51:25.236961  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.236968  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:25.236974  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:25.237017  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:25.263116  124886 cri.go:89] found id: ""
	I1008 14:51:25.263132  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.263138  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:25.263143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:25.263189  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:25.288378  124886 cri.go:89] found id: ""
	I1008 14:51:25.288395  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.288401  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:25.288406  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:25.288460  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:25.315195  124886 cri.go:89] found id: ""
	I1008 14:51:25.315210  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.315217  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:25.315225  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:25.315237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:25.371376  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:25.371387  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:25.371396  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:25.435272  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:25.435294  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.465980  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:25.465996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:25.535450  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:25.535477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.050276  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:28.061620  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:28.061668  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:28.088245  124886 cri.go:89] found id: ""
	I1008 14:51:28.088265  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.088274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:28.088278  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:28.088326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:28.113839  124886 cri.go:89] found id: ""
	I1008 14:51:28.113859  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.113870  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:28.113876  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:28.113940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:28.141395  124886 cri.go:89] found id: ""
	I1008 14:51:28.141414  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.141423  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:28.141429  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:28.141503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:28.168333  124886 cri.go:89] found id: ""
	I1008 14:51:28.168348  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.168354  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:28.168360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:28.168413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:28.192847  124886 cri.go:89] found id: ""
	I1008 14:51:28.192864  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.192870  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:28.192876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:28.192936  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:28.218780  124886 cri.go:89] found id: ""
	I1008 14:51:28.218795  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.218801  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:28.218806  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:28.218875  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:28.244592  124886 cri.go:89] found id: ""
	I1008 14:51:28.244612  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.244622  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:28.244631  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:28.244643  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:28.315714  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:28.315736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.329938  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:28.329954  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:28.387618  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:28.387629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:28.387641  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:28.453202  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:28.453224  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:30.984664  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:30.995891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:30.995939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:31.022304  124886 cri.go:89] found id: ""
	I1008 14:51:31.022328  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.022338  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:31.022344  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:31.022401  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:31.049041  124886 cri.go:89] found id: ""
	I1008 14:51:31.049060  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.049069  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:31.049075  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:31.049123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:31.076924  124886 cri.go:89] found id: ""
	I1008 14:51:31.076940  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.076949  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:31.076953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:31.077003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:31.102922  124886 cri.go:89] found id: ""
	I1008 14:51:31.102942  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.102950  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:31.102955  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:31.103003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:31.131223  124886 cri.go:89] found id: ""
	I1008 14:51:31.131237  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.131244  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:31.131248  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:31.131294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:31.157335  124886 cri.go:89] found id: ""
	I1008 14:51:31.157350  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.157356  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:31.157361  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:31.157403  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:31.183539  124886 cri.go:89] found id: ""
	I1008 14:51:31.183556  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.183563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:31.183571  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:31.183582  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:31.254970  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:31.254991  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:31.269535  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:31.269556  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:31.325660  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:31.325690  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:31.325702  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:31.390180  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:31.390201  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:33.920121  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:33.931525  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:33.931580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:33.956578  124886 cri.go:89] found id: ""
	I1008 14:51:33.956594  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.956601  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:33.956606  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:33.956652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:33.983065  124886 cri.go:89] found id: ""
	I1008 14:51:33.983083  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.983094  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:33.983100  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:33.983176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:34.009180  124886 cri.go:89] found id: ""
	I1008 14:51:34.009198  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.009206  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:34.009211  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:34.009266  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:34.035120  124886 cri.go:89] found id: ""
	I1008 14:51:34.035138  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.035145  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:34.035151  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:34.035207  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:34.060490  124886 cri.go:89] found id: ""
	I1008 14:51:34.060506  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.060512  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:34.060517  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:34.060565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:34.086320  124886 cri.go:89] found id: ""
	I1008 14:51:34.086338  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.086346  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:34.086351  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:34.086394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:34.111862  124886 cri.go:89] found id: ""
	I1008 14:51:34.111883  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.111893  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:34.111902  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:34.111921  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:34.181743  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:34.181765  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:34.196152  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:34.196171  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:34.252034  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:34.252045  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:34.252056  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:34.316760  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:34.316781  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:36.845595  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:36.856603  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:36.856648  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:36.883175  124886 cri.go:89] found id: ""
	I1008 14:51:36.883194  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.883202  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:36.883209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:36.883267  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:36.910081  124886 cri.go:89] found id: ""
	I1008 14:51:36.910096  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.910103  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:36.910107  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:36.910157  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:36.935036  124886 cri.go:89] found id: ""
	I1008 14:51:36.935051  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.935062  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:36.935068  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:36.935122  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:36.961981  124886 cri.go:89] found id: ""
	I1008 14:51:36.961998  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.962009  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:36.962016  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:36.962126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:36.989270  124886 cri.go:89] found id: ""
	I1008 14:51:36.989290  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.989299  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:36.989306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:36.989363  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:37.016135  124886 cri.go:89] found id: ""
	I1008 14:51:37.016153  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.016161  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:37.016165  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:37.016215  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:37.043172  124886 cri.go:89] found id: ""
	I1008 14:51:37.043191  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.043201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:37.043211  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:37.043227  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:37.100326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:37.100338  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:37.100351  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:37.163756  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:37.163777  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:37.193435  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:37.193471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:37.260908  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:37.260933  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:39.777967  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:39.789007  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:39.789059  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:39.815862  124886 cri.go:89] found id: ""
	I1008 14:51:39.815879  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.815886  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:39.815890  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:39.815942  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:39.841950  124886 cri.go:89] found id: ""
	I1008 14:51:39.841966  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.841973  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:39.841979  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:39.842039  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:39.868668  124886 cri.go:89] found id: ""
	I1008 14:51:39.868686  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.868696  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:39.868702  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:39.868755  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:39.895534  124886 cri.go:89] found id: ""
	I1008 14:51:39.895554  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.895564  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:39.895571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:39.895622  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:39.922579  124886 cri.go:89] found id: ""
	I1008 14:51:39.922598  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.922608  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:39.922614  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:39.922660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:39.948340  124886 cri.go:89] found id: ""
	I1008 14:51:39.948356  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.948363  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:39.948367  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:39.948410  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:39.975730  124886 cri.go:89] found id: ""
	I1008 14:51:39.975746  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.975752  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:39.975761  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:39.975771  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:40.004995  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:40.005014  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:40.075523  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:40.075546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:40.090104  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:40.090120  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:40.147226  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:40.147238  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:40.147253  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:42.711983  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:42.723356  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:42.723413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:42.749822  124886 cri.go:89] found id: ""
	I1008 14:51:42.749838  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.749844  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:42.749849  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:42.749917  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:42.776397  124886 cri.go:89] found id: ""
	I1008 14:51:42.776414  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.776421  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:42.776425  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:42.776493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:42.802489  124886 cri.go:89] found id: ""
	I1008 14:51:42.802508  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.802518  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:42.802524  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:42.802572  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:42.829172  124886 cri.go:89] found id: ""
	I1008 14:51:42.829187  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.829193  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:42.829198  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:42.829251  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:42.853534  124886 cri.go:89] found id: ""
	I1008 14:51:42.853552  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.853561  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:42.853568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:42.853635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:42.879567  124886 cri.go:89] found id: ""
	I1008 14:51:42.879583  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.879595  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:42.879601  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:42.879652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:42.904961  124886 cri.go:89] found id: ""
	I1008 14:51:42.904979  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.904986  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:42.904993  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:42.905009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:42.974363  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:42.974384  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:42.989172  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:42.989192  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:43.045247  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:43.045260  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:43.045275  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:43.106406  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:43.106429  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:45.637311  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:45.648040  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:45.648095  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:45.673462  124886 cri.go:89] found id: ""
	I1008 14:51:45.673481  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.673491  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:45.673497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:45.673550  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:45.698163  124886 cri.go:89] found id: ""
	I1008 14:51:45.698181  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.698188  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:45.698193  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:45.698246  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:45.723467  124886 cri.go:89] found id: ""
	I1008 14:51:45.723561  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.723573  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:45.723581  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:45.723641  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:45.748702  124886 cri.go:89] found id: ""
	I1008 14:51:45.748717  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.748726  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:45.748732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:45.748796  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:45.775585  124886 cri.go:89] found id: ""
	I1008 14:51:45.775604  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.775612  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:45.775617  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:45.775670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:45.801010  124886 cri.go:89] found id: ""
	I1008 14:51:45.801025  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.801031  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:45.801036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:45.801084  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:45.827042  124886 cri.go:89] found id: ""
	I1008 14:51:45.827059  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.827067  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:45.827075  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:45.827086  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:45.895458  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:45.895480  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:45.910085  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:45.910109  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:45.966571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:45.966593  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:45.966605  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:46.027581  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:46.027606  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:48.557168  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:48.568079  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:48.568130  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:48.594574  124886 cri.go:89] found id: ""
	I1008 14:51:48.594594  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.594603  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:48.594609  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:48.594653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:48.621962  124886 cri.go:89] found id: ""
	I1008 14:51:48.621977  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.621984  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:48.621989  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:48.622035  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:48.648065  124886 cri.go:89] found id: ""
	I1008 14:51:48.648080  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.648087  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:48.648091  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:48.648146  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:48.675285  124886 cri.go:89] found id: ""
	I1008 14:51:48.675300  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.675307  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:48.675311  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:48.675356  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:48.701191  124886 cri.go:89] found id: ""
	I1008 14:51:48.701210  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.701218  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:48.701225  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:48.701271  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:48.729042  124886 cri.go:89] found id: ""
	I1008 14:51:48.729069  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.729079  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:48.729086  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:48.729136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:48.754548  124886 cri.go:89] found id: ""
	I1008 14:51:48.754564  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.754572  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:48.754580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:48.754590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:48.822673  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:48.822705  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:48.836997  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:48.837017  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:48.894196  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:48.894212  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:48.894223  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:48.955101  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:48.955127  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.487365  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:51.498554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:51.498603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:51.525066  124886 cri.go:89] found id: ""
	I1008 14:51:51.525081  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.525088  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:51.525094  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:51.525147  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:51.550909  124886 cri.go:89] found id: ""
	I1008 14:51:51.550926  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.550933  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:51.550938  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:51.550989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:51.576844  124886 cri.go:89] found id: ""
	I1008 14:51:51.576860  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.576867  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:51.576871  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:51.576919  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:51.603876  124886 cri.go:89] found id: ""
	I1008 14:51:51.603894  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.603900  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:51.603907  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:51.603958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:51.630518  124886 cri.go:89] found id: ""
	I1008 14:51:51.630533  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.630540  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:51.630545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:51.630591  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:51.656592  124886 cri.go:89] found id: ""
	I1008 14:51:51.656625  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.656634  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:51.656641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:51.656686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:51.682732  124886 cri.go:89] found id: ""
	I1008 14:51:51.682750  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.682757  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:51.682766  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:51.682775  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:51.742589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:51.742612  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.771353  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:51.771369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:51.842948  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:51.842971  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:51.857862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:51.857882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:51.915551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.417267  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:54.428273  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:54.428333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:54.454016  124886 cri.go:89] found id: ""
	I1008 14:51:54.454030  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.454037  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:54.454042  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:54.454097  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:54.479088  124886 cri.go:89] found id: ""
	I1008 14:51:54.479104  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.479112  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:54.479117  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:54.479171  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:54.504383  124886 cri.go:89] found id: ""
	I1008 14:51:54.504401  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.504411  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:54.504418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:54.504481  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:54.530502  124886 cri.go:89] found id: ""
	I1008 14:51:54.530522  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.530529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:54.530534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:54.530578  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:54.556899  124886 cri.go:89] found id: ""
	I1008 14:51:54.556920  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.556929  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:54.556935  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:54.556983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:54.582860  124886 cri.go:89] found id: ""
	I1008 14:51:54.582878  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.582888  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:54.582895  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:54.582954  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:54.609653  124886 cri.go:89] found id: ""
	I1008 14:51:54.609670  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.609679  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:54.609689  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:54.609704  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:54.666095  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.666106  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:54.666116  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:54.725670  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:54.725693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:54.755377  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:54.755394  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:54.824839  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:54.824860  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.340378  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:57.351013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:57.351087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:57.377174  124886 cri.go:89] found id: ""
	I1008 14:51:57.377192  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.377201  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:57.377208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:57.377259  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:57.403239  124886 cri.go:89] found id: ""
	I1008 14:51:57.403254  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.403261  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:57.403271  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:57.403317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:57.429149  124886 cri.go:89] found id: ""
	I1008 14:51:57.429168  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.429179  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:57.429185  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:57.429244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:57.454095  124886 cri.go:89] found id: ""
	I1008 14:51:57.454114  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.454128  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:57.454133  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:57.454187  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:57.479640  124886 cri.go:89] found id: ""
	I1008 14:51:57.479658  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.479665  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:57.479670  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:57.479725  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:57.505776  124886 cri.go:89] found id: ""
	I1008 14:51:57.505795  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.505805  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:57.505811  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:57.505853  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:57.531837  124886 cri.go:89] found id: ""
	I1008 14:51:57.531852  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.531860  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:57.531867  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:57.531878  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:57.599522  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:57.599544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.614111  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:57.614132  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:57.671063  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:57.671074  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:57.671084  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:57.732027  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:57.732050  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:00.263338  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:00.274100  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:00.274167  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:00.299677  124886 cri.go:89] found id: ""
	I1008 14:52:00.299692  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.299698  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:00.299703  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:00.299744  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:00.325037  124886 cri.go:89] found id: ""
	I1008 14:52:00.325055  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.325065  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:00.325071  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:00.325128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:00.351372  124886 cri.go:89] found id: ""
	I1008 14:52:00.351388  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.351397  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:00.351402  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:00.351465  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:00.377746  124886 cri.go:89] found id: ""
	I1008 14:52:00.377761  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.377767  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:00.377772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:00.377838  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:00.403806  124886 cri.go:89] found id: ""
	I1008 14:52:00.403821  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.403827  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:00.403832  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:00.403888  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:00.431653  124886 cri.go:89] found id: ""
	I1008 14:52:00.431673  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.431682  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:00.431687  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:00.431732  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:00.458706  124886 cri.go:89] found id: ""
	I1008 14:52:00.458720  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.458727  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:00.458735  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:00.458744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:00.527333  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:00.527355  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:00.545238  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:00.545260  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:00.604166  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:00.604178  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:00.604190  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:00.667338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:00.667360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.196993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:03.207677  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:03.207730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:03.232932  124886 cri.go:89] found id: ""
	I1008 14:52:03.232952  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.232963  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:03.232969  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:03.233019  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:03.257910  124886 cri.go:89] found id: ""
	I1008 14:52:03.257927  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.257934  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:03.257939  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:03.257989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:03.282476  124886 cri.go:89] found id: ""
	I1008 14:52:03.282491  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.282498  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:03.282503  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:03.282556  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:03.307994  124886 cri.go:89] found id: ""
	I1008 14:52:03.308009  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.308016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:03.308020  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:03.308066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:03.333961  124886 cri.go:89] found id: ""
	I1008 14:52:03.333978  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.333985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:03.333990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:03.334036  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:03.360461  124886 cri.go:89] found id: ""
	I1008 14:52:03.360480  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.360491  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:03.360498  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:03.360546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:03.385935  124886 cri.go:89] found id: ""
	I1008 14:52:03.385951  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.385958  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:03.385965  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:03.385980  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:03.399673  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:03.399689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:03.456423  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:03.456433  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:03.456459  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:03.519728  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:03.519750  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.549347  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:03.549365  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.121403  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:06.132277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:06.132329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:06.158234  124886 cri.go:89] found id: ""
	I1008 14:52:06.158248  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.158255  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:06.158260  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:06.158308  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:06.184118  124886 cri.go:89] found id: ""
	I1008 14:52:06.184136  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.184145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:06.184151  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:06.184201  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:06.210586  124886 cri.go:89] found id: ""
	I1008 14:52:06.210604  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.210613  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:06.210619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:06.210682  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:06.236986  124886 cri.go:89] found id: ""
	I1008 14:52:06.237004  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.237013  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:06.237018  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:06.237064  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:06.264151  124886 cri.go:89] found id: ""
	I1008 14:52:06.264172  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.264182  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:06.264188  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:06.264240  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:06.290106  124886 cri.go:89] found id: ""
	I1008 14:52:06.290120  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.290126  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:06.290132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:06.290177  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:06.316419  124886 cri.go:89] found id: ""
	I1008 14:52:06.316435  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.316453  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:06.316464  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:06.316477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:06.377522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:06.377544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:06.407056  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:06.407075  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.474318  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:06.474342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:06.488482  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:06.488502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:06.546904  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
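(Between probes, minikube also gathers diagnostics from the node. The following is a condensed sketch, not captured output, using only the commands that appear verbatim in the log above; the order of the individual gathers varies from cycle to cycle.)

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a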
	I1008 14:52:09.048569  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:09.059380  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:09.059436  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:09.085888  124886 cri.go:89] found id: ""
	I1008 14:52:09.085906  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.085912  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:09.085918  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:09.085971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:09.113858  124886 cri.go:89] found id: ""
	I1008 14:52:09.113875  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.113882  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:09.113892  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:09.113939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:09.140388  124886 cri.go:89] found id: ""
	I1008 14:52:09.140407  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.140414  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:09.140420  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:09.140493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:09.168003  124886 cri.go:89] found id: ""
	I1008 14:52:09.168018  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.168025  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:09.168030  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:09.168075  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:09.194655  124886 cri.go:89] found id: ""
	I1008 14:52:09.194681  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.194690  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:09.194696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:09.194757  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:09.221388  124886 cri.go:89] found id: ""
	I1008 14:52:09.221405  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.221411  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:09.221416  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:09.221490  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:09.247075  124886 cri.go:89] found id: ""
	I1008 14:52:09.247093  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.247102  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:09.247122  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:09.247133  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:09.304638  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.304650  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:09.304664  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:09.368718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:09.368742  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:09.399217  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:09.399239  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:09.468608  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:09.468629  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:11.984769  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:11.995534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:11.995596  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:12.020218  124886 cri.go:89] found id: ""
	I1008 14:52:12.020234  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.020241  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:12.020247  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:12.020289  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:12.045959  124886 cri.go:89] found id: ""
	I1008 14:52:12.045978  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.045989  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:12.045996  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:12.046103  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:12.072101  124886 cri.go:89] found id: ""
	I1008 14:52:12.072118  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.072125  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:12.072129  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:12.072174  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:12.098793  124886 cri.go:89] found id: ""
	I1008 14:52:12.098808  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.098814  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:12.098819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:12.098871  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:12.124876  124886 cri.go:89] found id: ""
	I1008 14:52:12.124891  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.124900  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:12.124906  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:12.124973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:12.151678  124886 cri.go:89] found id: ""
	I1008 14:52:12.151695  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.151703  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:12.151708  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:12.151764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:12.176969  124886 cri.go:89] found id: ""
	I1008 14:52:12.176986  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.176994  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:12.177004  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:12.177019  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:12.247581  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:12.247604  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:12.262272  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:12.262290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:12.319283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:12.319306  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:12.319318  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:12.383384  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:12.383406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:14.914713  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:14.925495  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:14.925548  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:14.951182  124886 cri.go:89] found id: ""
	I1008 14:52:14.951197  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.951205  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:14.951209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:14.951265  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:14.978925  124886 cri.go:89] found id: ""
	I1008 14:52:14.978941  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.978948  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:14.978953  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:14.979004  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:15.003964  124886 cri.go:89] found id: ""
	I1008 14:52:15.003983  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.003992  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:15.003997  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:15.004061  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:15.030077  124886 cri.go:89] found id: ""
	I1008 14:52:15.030095  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.030102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:15.030107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:15.030154  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:15.055689  124886 cri.go:89] found id: ""
	I1008 14:52:15.055704  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.055711  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:15.055715  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:15.055760  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:15.081174  124886 cri.go:89] found id: ""
	I1008 14:52:15.081191  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.081198  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:15.081203  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:15.081262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:15.107235  124886 cri.go:89] found id: ""
	I1008 14:52:15.107251  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.107257  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:15.107265  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:15.107279  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:15.174130  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:15.174161  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:15.188435  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:15.188471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:15.244706  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:15.244720  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:15.244735  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:15.305071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:15.305098  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:17.835094  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:17.845787  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:17.845870  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:17.871734  124886 cri.go:89] found id: ""
	I1008 14:52:17.871749  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.871757  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:17.871764  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:17.871823  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:17.897412  124886 cri.go:89] found id: ""
	I1008 14:52:17.897433  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.897458  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:17.897467  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:17.897535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:17.925096  124886 cri.go:89] found id: ""
	I1008 14:52:17.925110  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.925117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:17.925122  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:17.925168  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:17.951272  124886 cri.go:89] found id: ""
	I1008 14:52:17.951289  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.951297  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:17.951301  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:17.951347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:17.976965  124886 cri.go:89] found id: ""
	I1008 14:52:17.976985  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.976992  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:17.976998  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:17.977042  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:18.003041  124886 cri.go:89] found id: ""
	I1008 14:52:18.003057  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.003064  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:18.003069  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:18.003113  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:18.028732  124886 cri.go:89] found id: ""
	I1008 14:52:18.028748  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.028756  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:18.028764  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:18.028774  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:18.092440  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:18.092467  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:18.121965  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:18.121984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:18.191653  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:18.191679  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:18.205820  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:18.205839  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:18.261002  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
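(The failure mode is identical in every cycle: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, no kube-apiserver container is ever found by crictl, so nothing is listening on that port and each "describe nodes" attempt ends with "connection refused". A minimal way to confirm the same condition from the node is sketched below; the curl call is illustrative only and is not part of the captured run.)

    curl -k https://localhost:8441/healthz      # while the apiserver is down this fails with "connection refused"
    sudo crictl ps -a --name=kube-apiserver    # expected here: no output, matching the log above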
	I1008 14:52:20.762706  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:20.773592  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:20.773660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:20.799324  124886 cri.go:89] found id: ""
	I1008 14:52:20.799340  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.799347  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:20.799352  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:20.799394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:20.825415  124886 cri.go:89] found id: ""
	I1008 14:52:20.825430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.825436  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:20.825452  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:20.825504  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:20.851415  124886 cri.go:89] found id: ""
	I1008 14:52:20.851430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.851437  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:20.851454  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:20.851503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:20.878438  124886 cri.go:89] found id: ""
	I1008 14:52:20.878476  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.878484  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:20.878489  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:20.878536  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:20.903857  124886 cri.go:89] found id: ""
	I1008 14:52:20.903873  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.903884  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:20.903890  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:20.903948  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:20.930746  124886 cri.go:89] found id: ""
	I1008 14:52:20.930763  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.930770  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:20.930791  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:20.930842  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:20.956487  124886 cri.go:89] found id: ""
	I1008 14:52:20.956504  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.956510  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:20.956518  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:20.956528  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:21.026065  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:21.026087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:21.040112  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:21.040129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:21.095891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:21.095902  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:21.095914  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:21.159107  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:21.159129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:23.687668  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:23.698250  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:23.698317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:23.723805  124886 cri.go:89] found id: ""
	I1008 14:52:23.723832  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.723842  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:23.723850  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:23.723900  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:23.749813  124886 cri.go:89] found id: ""
	I1008 14:52:23.749831  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.749840  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:23.749847  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:23.749918  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:23.774918  124886 cri.go:89] found id: ""
	I1008 14:52:23.774934  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.774940  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:23.774945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:23.774999  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:23.800898  124886 cri.go:89] found id: ""
	I1008 14:52:23.800918  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.800925  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:23.800930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:23.800978  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:23.827330  124886 cri.go:89] found id: ""
	I1008 14:52:23.827348  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.827356  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:23.827360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:23.827405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:23.853485  124886 cri.go:89] found id: ""
	I1008 14:52:23.853503  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.853510  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:23.853515  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:23.853560  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:23.878936  124886 cri.go:89] found id: ""
	I1008 14:52:23.878957  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.878967  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:23.878976  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:23.878994  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:23.934831  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:23.934841  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:23.934851  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:23.993858  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:23.993885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:24.022945  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:24.022962  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:24.092836  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:24.092865  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.608369  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:26.619983  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:26.620060  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:26.646593  124886 cri.go:89] found id: ""
	I1008 14:52:26.646611  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.646621  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:26.646627  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:26.646678  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:26.673294  124886 cri.go:89] found id: ""
	I1008 14:52:26.673310  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.673317  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:26.673324  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:26.673367  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:26.699235  124886 cri.go:89] found id: ""
	I1008 14:52:26.699251  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.699257  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:26.699262  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:26.699320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:26.724993  124886 cri.go:89] found id: ""
	I1008 14:52:26.725009  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.725016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:26.725021  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:26.725074  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:26.749744  124886 cri.go:89] found id: ""
	I1008 14:52:26.749760  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.749767  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:26.749772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:26.749821  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:26.775226  124886 cri.go:89] found id: ""
	I1008 14:52:26.775246  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.775255  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:26.775260  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:26.775316  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:26.805104  124886 cri.go:89] found id: ""
	I1008 14:52:26.805120  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.805128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:26.805136  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:26.805152  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:26.834601  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:26.834618  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:26.900340  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:26.900361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.914389  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:26.914406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:26.969896  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:26.969911  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:26.969927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.531143  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:29.542884  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:29.542952  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:29.570323  124886 cri.go:89] found id: ""
	I1008 14:52:29.570339  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.570345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:29.570350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:29.570395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:29.596735  124886 cri.go:89] found id: ""
	I1008 14:52:29.596750  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.596756  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:29.596762  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:29.596811  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:29.622878  124886 cri.go:89] found id: ""
	I1008 14:52:29.622892  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.622898  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:29.622903  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:29.622950  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:29.648836  124886 cri.go:89] found id: ""
	I1008 14:52:29.648857  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.648880  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:29.648887  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:29.648939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:29.674729  124886 cri.go:89] found id: ""
	I1008 14:52:29.674747  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.674753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:29.674758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:29.674802  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:29.700542  124886 cri.go:89] found id: ""
	I1008 14:52:29.700558  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.700565  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:29.700571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:29.700615  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:29.726353  124886 cri.go:89] found id: ""
	I1008 14:52:29.726369  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.726375  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:29.726383  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:29.726395  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:29.790538  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:29.790560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:29.805071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:29.805087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:29.861336  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:29.861354  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:29.861367  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.921484  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:29.921507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.452001  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:32.462783  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:32.462839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:32.488895  124886 cri.go:89] found id: ""
	I1008 14:52:32.488913  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.488922  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:32.488929  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:32.488977  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:32.514655  124886 cri.go:89] found id: ""
	I1008 14:52:32.514674  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.514683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:32.514688  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:32.514739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:32.542007  124886 cri.go:89] found id: ""
	I1008 14:52:32.542027  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.542037  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:32.542044  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:32.542100  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:32.569946  124886 cri.go:89] found id: ""
	I1008 14:52:32.569963  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.569970  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:32.569976  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:32.570022  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:32.595032  124886 cri.go:89] found id: ""
	I1008 14:52:32.595051  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.595061  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:32.595066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:32.595127  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:32.621883  124886 cri.go:89] found id: ""
	I1008 14:52:32.621903  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.621923  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:32.621930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:32.621983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:32.647589  124886 cri.go:89] found id: ""
	I1008 14:52:32.647606  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.647612  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:32.647620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:32.647630  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:32.703098  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:32.703108  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:32.703129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:32.766481  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:32.766502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.794530  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:32.794546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:32.864662  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:32.864687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.381050  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:35.391807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:35.391868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:35.418369  124886 cri.go:89] found id: ""
	I1008 14:52:35.418388  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.418397  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:35.418402  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:35.418467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:35.444660  124886 cri.go:89] found id: ""
	I1008 14:52:35.444676  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.444683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:35.444687  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:35.444736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:35.471158  124886 cri.go:89] found id: ""
	I1008 14:52:35.471183  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.471190  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:35.471195  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:35.471238  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:35.496271  124886 cri.go:89] found id: ""
	I1008 14:52:35.496288  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.496295  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:35.496300  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:35.496345  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:35.521987  124886 cri.go:89] found id: ""
	I1008 14:52:35.522005  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.522015  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:35.522039  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:35.522098  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:35.547647  124886 cri.go:89] found id: ""
	I1008 14:52:35.547664  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.547673  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:35.547678  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:35.547723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:35.573056  124886 cri.go:89] found id: ""
	I1008 14:52:35.573075  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.573085  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:35.573109  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:35.573123  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:35.640898  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:35.640923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.655247  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:35.655265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:35.712555  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:35.712565  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:35.712575  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:35.772556  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:35.772579  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.301881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:38.312627  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:38.312694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:38.337192  124886 cri.go:89] found id: ""
	I1008 14:52:38.337210  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.337220  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:38.337227  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:38.337278  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:38.361703  124886 cri.go:89] found id: ""
	I1008 14:52:38.361721  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.361730  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:38.361736  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:38.361786  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:38.387263  124886 cri.go:89] found id: ""
	I1008 14:52:38.387279  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.387286  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:38.387290  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:38.387334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:38.413808  124886 cri.go:89] found id: ""
	I1008 14:52:38.413824  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.413830  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:38.413835  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:38.413880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:38.440014  124886 cri.go:89] found id: ""
	I1008 14:52:38.440029  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.440036  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:38.440041  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:38.440085  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:38.466144  124886 cri.go:89] found id: ""
	I1008 14:52:38.466164  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.466174  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:38.466181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:38.466229  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:38.491536  124886 cri.go:89] found id: ""
	I1008 14:52:38.491554  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.491563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:38.491573  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:38.491584  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.520248  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:38.520265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:38.588833  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:38.588861  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:38.603136  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:38.603155  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:38.659278  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:38.659290  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:38.659301  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.224716  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:41.235550  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:41.235600  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:41.261421  124886 cri.go:89] found id: ""
	I1008 14:52:41.261436  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.261455  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:41.261463  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:41.261516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:41.286798  124886 cri.go:89] found id: ""
	I1008 14:52:41.286813  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.286839  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:41.286844  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:41.286904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:41.312542  124886 cri.go:89] found id: ""
	I1008 14:52:41.312558  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.312567  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:41.312574  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:41.312623  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:41.339001  124886 cri.go:89] found id: ""
	I1008 14:52:41.339016  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.339022  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:41.339027  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:41.339073  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:41.365019  124886 cri.go:89] found id: ""
	I1008 14:52:41.365040  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.365049  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:41.365056  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:41.365115  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:41.389878  124886 cri.go:89] found id: ""
	I1008 14:52:41.389897  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.389904  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:41.389910  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:41.389960  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:41.415856  124886 cri.go:89] found id: ""
	I1008 14:52:41.415875  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.415884  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:41.415895  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:41.415909  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:41.481175  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:41.481196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:41.495356  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:41.495373  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:41.552891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:41.552910  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:41.552927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.615245  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:41.615282  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:44.146351  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:44.157234  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:44.157294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:44.183016  124886 cri.go:89] found id: ""
	I1008 14:52:44.183032  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.183039  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:44.183044  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:44.183094  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:44.209452  124886 cri.go:89] found id: ""
	I1008 14:52:44.209471  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.209480  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:44.209487  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:44.209535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:44.236057  124886 cri.go:89] found id: ""
	I1008 14:52:44.236079  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.236088  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:44.236094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:44.236165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:44.262249  124886 cri.go:89] found id: ""
	I1008 14:52:44.262265  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.262274  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:44.262281  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:44.262333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:44.288222  124886 cri.go:89] found id: ""
	I1008 14:52:44.288240  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.288249  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:44.288254  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:44.288303  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:44.312991  124886 cri.go:89] found id: ""
	I1008 14:52:44.313009  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.313017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:44.313022  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:44.313066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:44.338794  124886 cri.go:89] found id: ""
	I1008 14:52:44.338814  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.338823  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:44.338835  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:44.338849  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:44.408632  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:44.408655  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:44.423360  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:44.423381  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:44.481035  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:44.481052  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:44.481068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:44.545061  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:44.545093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.075772  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:47.086739  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:47.086782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:47.112465  124886 cri.go:89] found id: ""
	I1008 14:52:47.112483  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.112492  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:47.112497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:47.112546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:47.140124  124886 cri.go:89] found id: ""
	I1008 14:52:47.140139  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.140145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:47.140150  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:47.140194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:47.167347  124886 cri.go:89] found id: ""
	I1008 14:52:47.167366  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.167376  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:47.167382  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:47.167428  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:47.193008  124886 cri.go:89] found id: ""
	I1008 14:52:47.193025  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.193032  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:47.193037  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:47.193081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:47.218907  124886 cri.go:89] found id: ""
	I1008 14:52:47.218922  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.218932  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:47.218938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:47.218992  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:47.244390  124886 cri.go:89] found id: ""
	I1008 14:52:47.244406  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.244413  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:47.244418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:47.244485  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:47.270432  124886 cri.go:89] found id: ""
	I1008 14:52:47.270460  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.270473  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:47.270482  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:47.270496  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:47.284419  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:47.284434  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:47.340814  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:47.340829  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:47.340840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:47.405347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:47.405371  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.434675  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:47.434693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:50.001509  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:50.012521  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:50.012580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:50.038871  124886 cri.go:89] found id: ""
	I1008 14:52:50.038886  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.038895  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:50.038901  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:50.038945  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:50.065691  124886 cri.go:89] found id: ""
	I1008 14:52:50.065707  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.065713  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:50.065718  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:50.065764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:50.091421  124886 cri.go:89] found id: ""
	I1008 14:52:50.091439  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.091459  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:50.091466  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:50.091516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:50.117900  124886 cri.go:89] found id: ""
	I1008 14:52:50.117916  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.117922  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:50.117927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:50.117971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:50.143795  124886 cri.go:89] found id: ""
	I1008 14:52:50.143811  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.143837  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:50.143842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:50.143889  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:50.170009  124886 cri.go:89] found id: ""
	I1008 14:52:50.170025  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.170032  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:50.170036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:50.170081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:50.195182  124886 cri.go:89] found id: ""
	I1008 14:52:50.195198  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.195204  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:50.195213  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:50.195226  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:50.208906  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:50.208923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:50.263732  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:50.263744  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:50.263754  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:50.321967  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:50.321990  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:50.350825  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:50.350843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:52.919243  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:52.929975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:52.930069  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:52.956423  124886 cri.go:89] found id: ""
	I1008 14:52:52.956439  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.956463  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:52.956470  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:52.956519  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:52.982128  124886 cri.go:89] found id: ""
	I1008 14:52:52.982143  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.982150  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:52.982155  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:52.982204  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:53.008335  124886 cri.go:89] found id: ""
	I1008 14:52:53.008351  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.008358  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:53.008363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:53.008416  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:53.035683  124886 cri.go:89] found id: ""
	I1008 14:52:53.035698  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.035705  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:53.035710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:53.035753  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:53.061482  124886 cri.go:89] found id: ""
	I1008 14:52:53.061590  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.061610  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:53.061619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:53.061673  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:53.088358  124886 cri.go:89] found id: ""
	I1008 14:52:53.088375  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.088384  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:53.088390  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:53.088467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:53.113970  124886 cri.go:89] found id: ""
	I1008 14:52:53.113988  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.113995  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:53.114003  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:53.114016  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:53.181486  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:53.181511  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:53.195603  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:53.195620  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:53.251571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:53.251582  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:53.251592  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:53.312589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:53.312610  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:55.843180  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:55.854192  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:55.854250  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:55.878967  124886 cri.go:89] found id: ""
	I1008 14:52:55.878984  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.878992  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:55.878997  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:55.879050  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:55.904136  124886 cri.go:89] found id: ""
	I1008 14:52:55.904151  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.904157  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:55.904174  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:55.904216  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:55.928319  124886 cri.go:89] found id: ""
	I1008 14:52:55.928337  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.928348  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:55.928353  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:55.928406  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:55.955314  124886 cri.go:89] found id: ""
	I1008 14:52:55.955330  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.955338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:55.955345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:55.955405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:55.980957  124886 cri.go:89] found id: ""
	I1008 14:52:55.980976  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.980985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:55.980992  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:55.981040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:56.006492  124886 cri.go:89] found id: ""
	I1008 14:52:56.006507  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.006514  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:56.006519  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:56.006566  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:56.032919  124886 cri.go:89] found id: ""
	I1008 14:52:56.032934  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.032940  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:56.032948  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:56.032960  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:56.061693  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:56.061713  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:56.127262  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:56.127284  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:56.141728  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:56.141744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:56.197783  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:56.197799  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:56.197815  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:58.759309  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:58.770096  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:58.770150  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:58.796177  124886 cri.go:89] found id: ""
	I1008 14:52:58.796192  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.796199  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:58.796208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:58.796260  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:58.821988  124886 cri.go:89] found id: ""
	I1008 14:52:58.822006  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.822013  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:58.822018  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:58.822068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:58.847935  124886 cri.go:89] found id: ""
	I1008 14:52:58.847953  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.847961  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:58.847966  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:58.848015  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:58.874796  124886 cri.go:89] found id: ""
	I1008 14:52:58.874814  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.874821  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:58.874826  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:58.874880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:58.899925  124886 cri.go:89] found id: ""
	I1008 14:52:58.899941  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.899948  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:58.899953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:58.900008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:58.926934  124886 cri.go:89] found id: ""
	I1008 14:52:58.926950  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.926958  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:58.926963  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:58.927006  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:58.953664  124886 cri.go:89] found id: ""
	I1008 14:52:58.953680  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.953687  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:58.953694  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:58.953709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:59.010616  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:59.010629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:59.010640  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:59.071358  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:59.071382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:59.099863  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:59.099886  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:59.168071  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:59.168163  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.684667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:01.695456  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:01.695524  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:01.721627  124886 cri.go:89] found id: ""
	I1008 14:53:01.721644  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.721652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:01.721656  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:01.721715  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:01.748495  124886 cri.go:89] found id: ""
	I1008 14:53:01.748512  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.748518  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:01.748523  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:01.748583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:01.774281  124886 cri.go:89] found id: ""
	I1008 14:53:01.774298  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.774310  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:01.774316  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:01.774377  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:01.800414  124886 cri.go:89] found id: ""
	I1008 14:53:01.800430  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.800437  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:01.800458  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:01.800513  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:01.825727  124886 cri.go:89] found id: ""
	I1008 14:53:01.825746  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.825753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:01.825758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:01.825804  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:01.852777  124886 cri.go:89] found id: ""
	I1008 14:53:01.852794  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.852802  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:01.852807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:01.852855  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:01.879499  124886 cri.go:89] found id: ""
	I1008 14:53:01.879516  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.879522  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:01.879530  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:01.879542  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:01.908367  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:01.908386  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:01.976337  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:01.976358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.990844  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:01.990863  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:02.047840  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:02.047852  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:02.047864  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.612824  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:04.623886  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:04.623937  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:04.650245  124886 cri.go:89] found id: ""
	I1008 14:53:04.650265  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.650274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:04.650282  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:04.650338  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:04.675795  124886 cri.go:89] found id: ""
	I1008 14:53:04.675814  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.675849  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:04.675856  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:04.675910  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:04.701855  124886 cri.go:89] found id: ""
	I1008 14:53:04.701874  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.701883  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:04.701889  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:04.701951  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:04.727569  124886 cri.go:89] found id: ""
	I1008 14:53:04.727584  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.727590  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:04.727595  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:04.727637  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:04.753254  124886 cri.go:89] found id: ""
	I1008 14:53:04.753269  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.753276  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:04.753280  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:04.753329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:04.779529  124886 cri.go:89] found id: ""
	I1008 14:53:04.779548  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.779557  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:04.779564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:04.779611  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:04.806307  124886 cri.go:89] found id: ""
	I1008 14:53:04.806326  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.806335  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:04.806346  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:04.806361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:04.820357  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:04.820374  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:04.876718  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:04.876732  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:04.876748  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.940387  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:04.940412  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:04.969994  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:04.970009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.538422  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:07.550831  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:07.550884  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:07.577673  124886 cri.go:89] found id: ""
	I1008 14:53:07.577687  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.577693  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:07.577698  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:07.577750  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:07.603662  124886 cri.go:89] found id: ""
	I1008 14:53:07.603680  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.603695  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:07.603700  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:07.603746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:07.629802  124886 cri.go:89] found id: ""
	I1008 14:53:07.629821  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.629830  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:07.629834  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:07.629886  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:07.656081  124886 cri.go:89] found id: ""
	I1008 14:53:07.656096  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.656102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:07.656107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:07.656170  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:07.682162  124886 cri.go:89] found id: ""
	I1008 14:53:07.682177  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.682184  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:07.682189  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:07.682233  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:07.708617  124886 cri.go:89] found id: ""
	I1008 14:53:07.708635  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.708648  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:07.708653  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:07.708708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:07.734755  124886 cri.go:89] found id: ""
	I1008 14:53:07.734772  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.734782  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:07.734793  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:07.734807  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:07.794522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:07.794548  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:07.823563  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:07.823581  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.892786  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:07.892808  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:07.907262  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:07.907281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:07.962940  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.464656  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:10.476746  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:10.476800  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:10.502937  124886 cri.go:89] found id: ""
	I1008 14:53:10.502958  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.502968  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:10.502974  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:10.503025  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:10.529780  124886 cri.go:89] found id: ""
	I1008 14:53:10.529796  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.529803  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:10.529807  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:10.529856  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:10.556092  124886 cri.go:89] found id: ""
	I1008 14:53:10.556108  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.556117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:10.556124  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:10.556184  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:10.582264  124886 cri.go:89] found id: ""
	I1008 14:53:10.582281  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.582290  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:10.582296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:10.582354  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:10.608631  124886 cri.go:89] found id: ""
	I1008 14:53:10.608647  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.608655  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:10.608662  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:10.608721  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:10.635697  124886 cri.go:89] found id: ""
	I1008 14:53:10.635715  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.635725  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:10.635732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:10.635793  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:10.661998  124886 cri.go:89] found id: ""
	I1008 14:53:10.662018  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.662028  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:10.662040  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:10.662055  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:10.728096  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:10.728121  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:10.742521  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:10.742543  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:10.799551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.799566  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:10.799578  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:10.863614  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:10.863636  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.396084  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:13.407066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:13.407128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:13.433323  124886 cri.go:89] found id: ""
	I1008 14:53:13.433339  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.433345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:13.433350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:13.433393  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:13.460409  124886 cri.go:89] found id: ""
	I1008 14:53:13.460510  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.460522  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:13.460528  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:13.460589  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:13.487660  124886 cri.go:89] found id: ""
	I1008 14:53:13.487679  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.487689  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:13.487696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:13.487746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:13.515522  124886 cri.go:89] found id: ""
	I1008 14:53:13.515538  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.515546  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:13.515551  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:13.515595  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:13.540751  124886 cri.go:89] found id: ""
	I1008 14:53:13.540767  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.540773  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:13.540778  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:13.540846  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:13.566812  124886 cri.go:89] found id: ""
	I1008 14:53:13.566829  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.566837  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:13.566842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:13.566904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:13.593236  124886 cri.go:89] found id: ""
	I1008 14:53:13.593255  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.593262  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:13.593271  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:13.593281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:13.657627  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:13.657651  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.686303  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:13.686320  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:13.755568  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:13.755591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:13.769800  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:13.769819  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:13.826318  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:16.327013  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:16.337840  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:16.337908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:16.363203  124886 cri.go:89] found id: ""
	I1008 14:53:16.363221  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.363230  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:16.363235  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:16.363288  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:16.388535  124886 cri.go:89] found id: ""
	I1008 14:53:16.388551  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.388557  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:16.388563  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:16.388606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:16.414195  124886 cri.go:89] found id: ""
	I1008 14:53:16.414213  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.414221  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:16.414226  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:16.414274  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:16.440199  124886 cri.go:89] found id: ""
	I1008 14:53:16.440214  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.440221  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:16.440227  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:16.440283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:16.465899  124886 cri.go:89] found id: ""
	I1008 14:53:16.465918  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.465925  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:16.465931  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:16.465976  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:16.491135  124886 cri.go:89] found id: ""
	I1008 14:53:16.491151  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.491157  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:16.491162  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:16.491205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:16.517298  124886 cri.go:89] found id: ""
	I1008 14:53:16.517315  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.517323  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:16.517331  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:16.517342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:16.581777  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:16.581803  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:16.611824  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:16.611843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:16.679935  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:16.679957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:16.694087  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:16.694103  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:16.750382  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:19.252068  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:19.262927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:19.262980  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:19.288263  124886 cri.go:89] found id: ""
	I1008 14:53:19.288280  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.288286  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:19.288291  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:19.288334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:19.314749  124886 cri.go:89] found id: ""
	I1008 14:53:19.314769  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.314776  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:19.314781  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:19.314833  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:19.343105  124886 cri.go:89] found id: ""
	I1008 14:53:19.343124  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.343132  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:19.343137  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:19.343194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:19.369348  124886 cri.go:89] found id: ""
	I1008 14:53:19.369367  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.369376  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:19.369384  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:19.369438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:19.394541  124886 cri.go:89] found id: ""
	I1008 14:53:19.394556  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.394564  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:19.394569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:19.394617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:19.419883  124886 cri.go:89] found id: ""
	I1008 14:53:19.419900  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.419907  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:19.419911  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:19.419959  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:19.447316  124886 cri.go:89] found id: ""
	I1008 14:53:19.447332  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.447339  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:19.447347  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:19.447360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:19.509190  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:19.509213  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:19.538580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:19.538601  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:19.610379  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:19.610406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:19.625094  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:19.625115  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:19.682583  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:22.184381  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:22.195435  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:22.195496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:22.222530  124886 cri.go:89] found id: ""
	I1008 14:53:22.222549  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.222559  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:22.222565  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:22.222631  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:22.249103  124886 cri.go:89] found id: ""
	I1008 14:53:22.249118  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.249125  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:22.249130  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:22.249185  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:22.275859  124886 cri.go:89] found id: ""
	I1008 14:53:22.275877  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.275886  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:22.275891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:22.275944  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:22.301816  124886 cri.go:89] found id: ""
	I1008 14:53:22.301835  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.301845  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:22.301852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:22.301906  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:22.328795  124886 cri.go:89] found id: ""
	I1008 14:53:22.328810  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.328817  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:22.328821  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:22.328877  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:22.355119  124886 cri.go:89] found id: ""
	I1008 14:53:22.355134  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.355141  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:22.355146  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:22.355200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:22.382211  124886 cri.go:89] found id: ""
	I1008 14:53:22.382229  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.382238  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:22.382248  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:22.382262  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:22.442814  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:22.442840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:22.473721  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:22.473746  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:22.539788  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:22.539811  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:22.554277  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:22.554295  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:22.610102  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.110358  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:25.121359  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:25.121409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:25.146726  124886 cri.go:89] found id: ""
	I1008 14:53:25.146741  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.146747  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:25.146752  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:25.146797  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:25.173762  124886 cri.go:89] found id: ""
	I1008 14:53:25.173780  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.173788  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:25.173792  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:25.173839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:25.200613  124886 cri.go:89] found id: ""
	I1008 14:53:25.200630  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.200636  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:25.200641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:25.200686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:25.227307  124886 cri.go:89] found id: ""
	I1008 14:53:25.227327  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.227338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:25.227345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:25.227395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:25.253257  124886 cri.go:89] found id: ""
	I1008 14:53:25.253272  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.253278  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:25.253283  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:25.253329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:25.281060  124886 cri.go:89] found id: ""
	I1008 14:53:25.281077  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.281089  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:25.281094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:25.281140  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:25.306651  124886 cri.go:89] found id: ""
	I1008 14:53:25.306668  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.306678  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:25.306688  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:25.306699  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:25.373410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:25.373433  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:25.388282  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:25.388304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:25.445863  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.445874  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:25.445885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:25.510564  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:25.510590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.041417  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:28.052378  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:28.052432  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:28.078711  124886 cri.go:89] found id: ""
	I1008 14:53:28.078728  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.078734  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:28.078740  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:28.078782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:28.105010  124886 cri.go:89] found id: ""
	I1008 14:53:28.105025  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.105031  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:28.105036  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:28.105088  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:28.131983  124886 cri.go:89] found id: ""
	I1008 14:53:28.132001  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.132011  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:28.132017  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:28.132076  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:28.159135  124886 cri.go:89] found id: ""
	I1008 14:53:28.159153  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.159160  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:28.159166  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:28.159212  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:28.187793  124886 cri.go:89] found id: ""
	I1008 14:53:28.187811  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.187821  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:28.187827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:28.187872  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:28.214232  124886 cri.go:89] found id: ""
	I1008 14:53:28.214251  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.214265  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:28.214272  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:28.214335  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:28.240649  124886 cri.go:89] found id: ""
	I1008 14:53:28.240663  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.240669  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:28.240677  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:28.240687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:28.304071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:28.304094  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.333331  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:28.333346  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:28.401896  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:28.401919  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:28.416514  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:28.416531  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:28.472271  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:30.972553  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:30.983612  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:30.983666  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:31.011336  124886 cri.go:89] found id: ""
	I1008 14:53:31.011350  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.011357  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:31.011362  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:31.011405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:31.036913  124886 cri.go:89] found id: ""
	I1008 14:53:31.036935  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.036944  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:31.036948  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:31.037003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:31.063500  124886 cri.go:89] found id: ""
	I1008 14:53:31.063516  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.063523  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:31.063527  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:31.063582  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:31.091035  124886 cri.go:89] found id: ""
	I1008 14:53:31.091057  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.091066  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:31.091073  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:31.091123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:31.117295  124886 cri.go:89] found id: ""
	I1008 14:53:31.117310  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.117317  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:31.117322  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:31.117372  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:31.143795  124886 cri.go:89] found id: ""
	I1008 14:53:31.143810  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.143815  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:31.143820  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:31.143863  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:31.170134  124886 cri.go:89] found id: ""
	I1008 14:53:31.170150  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.170157  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:31.170164  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:31.170174  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:31.241300  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:31.241324  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:31.255637  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:31.255656  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:31.312716  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:31.312725  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:31.312736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:31.377091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:31.377114  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:33.907080  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:33.918207  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:33.918262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:33.944092  124886 cri.go:89] found id: ""
	I1008 14:53:33.944111  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.944122  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:33.944129  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:33.944192  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:33.970271  124886 cri.go:89] found id: ""
	I1008 14:53:33.970286  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.970293  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:33.970298  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:33.970347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:33.996407  124886 cri.go:89] found id: ""
	I1008 14:53:33.996421  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.996427  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:33.996433  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:33.996503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:34.023513  124886 cri.go:89] found id: ""
	I1008 14:53:34.023533  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.023542  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:34.023549  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:34.023606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:34.050777  124886 cri.go:89] found id: ""
	I1008 14:53:34.050797  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.050807  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:34.050813  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:34.050868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:34.077691  124886 cri.go:89] found id: ""
	I1008 14:53:34.077710  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.077719  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:34.077724  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:34.077769  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:34.104354  124886 cri.go:89] found id: ""
	I1008 14:53:34.104373  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.104380  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:34.104388  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:34.104404  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:34.171873  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:34.171899  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:34.185891  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:34.185908  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:34.243162  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:34.243172  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:34.243185  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:34.306766  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:34.306791  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:36.836905  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:36.848013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:36.848068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:36.873912  124886 cri.go:89] found id: ""
	I1008 14:53:36.873930  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.873938  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:36.873944  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:36.873994  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:36.899859  124886 cri.go:89] found id: ""
	I1008 14:53:36.899875  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.899881  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:36.899886  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:36.899930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:36.926292  124886 cri.go:89] found id: ""
	I1008 14:53:36.926314  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.926321  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:36.926326  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:36.926370  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:36.952172  124886 cri.go:89] found id: ""
	I1008 14:53:36.952189  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.952196  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:36.952201  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:36.952248  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:36.978525  124886 cri.go:89] found id: ""
	I1008 14:53:36.978542  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.978548  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:36.978553  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:36.978605  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:37.005955  124886 cri.go:89] found id: ""
	I1008 14:53:37.005973  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.005984  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:37.005990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:37.006037  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:37.032282  124886 cri.go:89] found id: ""
	I1008 14:53:37.032300  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.032310  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:37.032320  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:37.032336  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:37.100471  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:37.100507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:37.114707  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:37.114727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:37.173117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:37.173128  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:37.173138  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:37.237613  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:37.237637  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:39.769167  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:39.780181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:39.780239  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:39.805900  124886 cri.go:89] found id: ""
	I1008 14:53:39.805921  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.805928  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:39.805935  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:39.805982  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:39.832463  124886 cri.go:89] found id: ""
	I1008 14:53:39.832485  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.832493  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:39.832501  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:39.832565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:39.859105  124886 cri.go:89] found id: ""
	I1008 14:53:39.859120  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.859127  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:39.859132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:39.859176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:39.885372  124886 cri.go:89] found id: ""
	I1008 14:53:39.885395  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.885402  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:39.885410  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:39.885476  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:39.911669  124886 cri.go:89] found id: ""
	I1008 14:53:39.911684  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.911691  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:39.911696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:39.911743  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:39.939236  124886 cri.go:89] found id: ""
	I1008 14:53:39.939254  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.939263  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:39.939269  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:39.939329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:39.967816  124886 cri.go:89] found id: ""
	I1008 14:53:39.967833  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.967839  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:39.967847  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:39.967859  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:39.982071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:39.982090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:40.038524  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:40.038545  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:40.038560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:40.099347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:40.099369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:40.128637  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:40.128654  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.700345  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:42.711170  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:42.711224  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:42.738404  124886 cri.go:89] found id: ""
	I1008 14:53:42.738420  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.738426  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:42.738431  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:42.738496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:42.765170  124886 cri.go:89] found id: ""
	I1008 14:53:42.765185  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.765192  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:42.765196  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:42.765244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:42.790844  124886 cri.go:89] found id: ""
	I1008 14:53:42.790862  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.790870  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:42.790876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:42.790920  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:42.817749  124886 cri.go:89] found id: ""
	I1008 14:53:42.817765  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.817772  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:42.817777  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:42.817826  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:42.844796  124886 cri.go:89] found id: ""
	I1008 14:53:42.844815  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.844823  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:42.844827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:42.844882  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:42.870976  124886 cri.go:89] found id: ""
	I1008 14:53:42.870993  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.871001  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:42.871006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:42.871051  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:42.897679  124886 cri.go:89] found id: ""
	I1008 14:53:42.897698  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.897707  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:42.897716  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:42.897727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.967720  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:42.967744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:42.981967  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:42.981984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:43.039728  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:43.039742  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:43.039753  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:43.101886  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:43.101911  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:45.635598  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:45.646564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:45.646617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:45.673775  124886 cri.go:89] found id: ""
	I1008 14:53:45.673791  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.673797  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:45.673802  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:45.673845  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:45.700610  124886 cri.go:89] found id: ""
	I1008 14:53:45.700627  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.700633  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:45.700638  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:45.700694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:45.726636  124886 cri.go:89] found id: ""
	I1008 14:53:45.726653  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.726662  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:45.726669  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:45.726723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:45.753352  124886 cri.go:89] found id: ""
	I1008 14:53:45.753367  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.753374  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:45.753379  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:45.753434  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:45.780250  124886 cri.go:89] found id: ""
	I1008 14:53:45.780266  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.780272  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:45.780277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:45.780326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:45.805847  124886 cri.go:89] found id: ""
	I1008 14:53:45.805863  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.805870  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:45.805875  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:45.805940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:45.832274  124886 cri.go:89] found id: ""
	I1008 14:53:45.832290  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.832297  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:45.832304  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:45.832315  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:45.901895  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:45.901925  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:45.916420  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:45.916438  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:45.972937  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:45.972948  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:45.972958  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:46.034817  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:46.034841  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.564993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:48.576052  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:48.576102  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:48.602007  124886 cri.go:89] found id: ""
	I1008 14:53:48.602024  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.602031  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:48.602035  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:48.602080  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:48.628143  124886 cri.go:89] found id: ""
	I1008 14:53:48.628160  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.628168  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:48.628173  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:48.628218  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:48.655880  124886 cri.go:89] found id: ""
	I1008 14:53:48.655898  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.655907  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:48.655913  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:48.655958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:48.683255  124886 cri.go:89] found id: ""
	I1008 14:53:48.683270  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.683278  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:48.683284  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:48.683337  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:48.709473  124886 cri.go:89] found id: ""
	I1008 14:53:48.709492  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.709501  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:48.709508  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:48.709567  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:48.736246  124886 cri.go:89] found id: ""
	I1008 14:53:48.736268  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.736274  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:48.736279  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:48.736327  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:48.763463  124886 cri.go:89] found id: ""
	I1008 14:53:48.763483  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.763493  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:48.763503  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:48.763518  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.792359  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:48.792378  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:48.859056  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:48.859077  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:48.873385  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:48.873405  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:48.931065  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:48.931075  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:48.931087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:51.494941  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:51.505819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:51.505869  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:51.533622  124886 cri.go:89] found id: ""
	I1008 14:53:51.533643  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.533652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:51.533659  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:51.533707  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:51.560499  124886 cri.go:89] found id: ""
	I1008 14:53:51.560519  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.560528  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:51.560536  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:51.560584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:51.587541  124886 cri.go:89] found id: ""
	I1008 14:53:51.587556  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.587564  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:51.587569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:51.587616  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:51.614266  124886 cri.go:89] found id: ""
	I1008 14:53:51.614284  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.614291  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:51.614296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:51.614343  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:51.639614  124886 cri.go:89] found id: ""
	I1008 14:53:51.639632  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.639641  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:51.639649  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:51.639708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:51.667306  124886 cri.go:89] found id: ""
	I1008 14:53:51.667322  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.667328  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:51.667333  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:51.667375  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:51.692160  124886 cri.go:89] found id: ""
	I1008 14:53:51.692175  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.692182  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:51.692191  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:51.692204  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:51.720341  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:51.720358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:51.785600  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:51.785622  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:51.800298  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:51.800317  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:51.857283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:51.857293  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:51.857304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:54.424673  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:54.435975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:54.436023  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:54.462429  124886 cri.go:89] found id: ""
	I1008 14:53:54.462462  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.462472  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:54.462479  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:54.462528  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:54.489261  124886 cri.go:89] found id: ""
	I1008 14:53:54.489276  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.489284  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:54.489289  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:54.489344  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:54.514962  124886 cri.go:89] found id: ""
	I1008 14:53:54.514980  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.514990  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:54.514996  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:54.515040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:54.541414  124886 cri.go:89] found id: ""
	I1008 14:53:54.541428  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.541435  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:54.541439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:54.541501  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:54.567913  124886 cri.go:89] found id: ""
	I1008 14:53:54.567931  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.567940  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:54.567945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:54.568008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:54.594492  124886 cri.go:89] found id: ""
	I1008 14:53:54.594511  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.594522  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:54.594528  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:54.594583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:54.621305  124886 cri.go:89] found id: ""
	I1008 14:53:54.621321  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.621330  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:54.621338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:54.621348  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:54.648627  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:54.648645  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:54.717360  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:54.717382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:54.731905  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:54.731923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:54.788630  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:54.788640  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:54.788650  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.353718  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:57.365518  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:57.365570  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:57.391621  124886 cri.go:89] found id: ""
	I1008 14:53:57.391638  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.391646  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:57.391650  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:57.391704  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:57.419557  124886 cri.go:89] found id: ""
	I1008 14:53:57.419574  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.419582  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:57.419587  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:57.419643  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:57.447029  124886 cri.go:89] found id: ""
	I1008 14:53:57.447047  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.447059  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:57.447077  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:57.447126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:57.473391  124886 cri.go:89] found id: ""
	I1008 14:53:57.473410  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.473418  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:57.473423  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:57.473494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:57.499437  124886 cri.go:89] found id: ""
	I1008 14:53:57.499472  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.499481  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:57.499486  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:57.499542  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:57.525753  124886 cri.go:89] found id: ""
	I1008 14:53:57.525770  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.525776  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:57.525782  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:57.525827  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:57.555506  124886 cri.go:89] found id: ""
	I1008 14:53:57.555523  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.555529  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:57.555539  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:57.555553  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:57.623045  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:57.623068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:57.637620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:57.637638  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:57.695326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:57.695339  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:57.695356  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.755685  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:57.755710  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:00.285648  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:00.296554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:00.296603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:00.322379  124886 cri.go:89] found id: ""
	I1008 14:54:00.322396  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.322405  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:00.322409  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:00.322474  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:00.349397  124886 cri.go:89] found id: ""
	I1008 14:54:00.349414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.349423  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:00.349429  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:00.349507  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:00.375588  124886 cri.go:89] found id: ""
	I1008 14:54:00.375602  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.375608  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:00.375613  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:00.375670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:00.401398  124886 cri.go:89] found id: ""
	I1008 14:54:00.401414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.401420  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:00.401426  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:00.401494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:00.427652  124886 cri.go:89] found id: ""
	I1008 14:54:00.427668  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.427675  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:00.427680  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:00.427736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:00.451896  124886 cri.go:89] found id: ""
	I1008 14:54:00.451911  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.451918  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:00.451923  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:00.451967  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:00.478107  124886 cri.go:89] found id: ""
	I1008 14:54:00.478122  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.478128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:00.478135  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:00.478145  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:00.547950  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:00.547974  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:00.561968  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:00.561986  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:00.618117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:00.618131  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:00.618141  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:00.683464  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:00.683490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.211808  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:03.222618  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:03.222667  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:03.248716  124886 cri.go:89] found id: ""
	I1008 14:54:03.248732  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.248738  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:03.248742  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:03.248784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:03.275183  124886 cri.go:89] found id: ""
	I1008 14:54:03.275202  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.275209  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:03.275214  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:03.275262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:03.301882  124886 cri.go:89] found id: ""
	I1008 14:54:03.301909  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.301915  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:03.301920  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:03.301966  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:03.328783  124886 cri.go:89] found id: ""
	I1008 14:54:03.328799  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.328811  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:03.328817  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:03.328864  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:03.355235  124886 cri.go:89] found id: ""
	I1008 14:54:03.355251  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.355259  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:03.355268  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:03.355313  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:03.382286  124886 cri.go:89] found id: ""
	I1008 14:54:03.382305  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.382313  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:03.382318  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:03.382371  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:03.408682  124886 cri.go:89] found id: ""
	I1008 14:54:03.408700  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.408708  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:03.408718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:03.408732  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.438177  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:03.438196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:03.507859  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:03.507881  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:03.523723  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:03.523747  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:03.580407  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:03.580418  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:03.580430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.142863  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:06.153852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:06.153912  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:06.180234  124886 cri.go:89] found id: ""
	I1008 14:54:06.180253  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.180264  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:06.180271  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:06.180320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:06.207080  124886 cri.go:89] found id: ""
	I1008 14:54:06.207094  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.207101  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:06.207106  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:06.207152  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:06.232369  124886 cri.go:89] found id: ""
	I1008 14:54:06.232384  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.232390  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:06.232394  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:06.232438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:06.257360  124886 cri.go:89] found id: ""
	I1008 14:54:06.257376  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.257383  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:06.257388  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:06.257433  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:06.284487  124886 cri.go:89] found id: ""
	I1008 14:54:06.284507  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.284516  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:06.284523  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:06.284584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:06.310846  124886 cri.go:89] found id: ""
	I1008 14:54:06.310863  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.310874  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:06.310882  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:06.310935  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:06.337095  124886 cri.go:89] found id: ""
	I1008 14:54:06.337114  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.337121  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:06.337130  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:06.337142  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:06.406561  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:06.406591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:06.421066  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:06.421088  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:06.477926  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:06.477943  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:06.477957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.538516  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:06.538537  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:09.071758  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:09.082621  124886 kubeadm.go:601] duration metric: took 4m3.01446136s to restartPrimaryControlPlane
	W1008 14:54:09.082718  124886 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 14:54:09.082774  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:54:09.534098  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:54:09.546894  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:54:09.555065  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:54:09.555116  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:54:09.563122  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:54:09.563134  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:54:09.563181  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:54:09.571418  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:54:09.571492  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:54:09.579061  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:54:09.587199  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:54:09.587244  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:54:09.594420  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.602223  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:54:09.602263  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.609598  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:54:09.616978  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:54:09.617035  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:54:09.624225  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:54:09.679083  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:54:09.736432  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:58:12.118648  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:58:12.118737  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:58:12.121564  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:58:12.121611  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:58:12.121691  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:58:12.121739  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:58:12.121768  124886 kubeadm.go:318] OS: Linux
	I1008 14:58:12.121805  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:58:12.121846  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:58:12.121885  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:58:12.121936  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:58:12.121975  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:58:12.122056  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:58:12.122130  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:58:12.122194  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:58:12.122280  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:58:12.122381  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:58:12.122523  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:58:12.122608  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:58:12.124721  124886 out.go:252]   - Generating certificates and keys ...
	I1008 14:58:12.124815  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:58:12.124880  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:58:12.124964  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:58:12.125031  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:58:12.125148  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:58:12.125193  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:58:12.125282  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:58:12.125362  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:58:12.125490  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:58:12.125594  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:58:12.125626  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:58:12.125673  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:58:12.125714  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:58:12.125760  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:58:12.125802  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:58:12.125857  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:58:12.125902  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:58:12.125971  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:58:12.126032  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:58:12.128152  124886 out.go:252]   - Booting up control plane ...
	I1008 14:58:12.128237  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:58:12.128300  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:58:12.128371  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:58:12.128508  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:58:12.128583  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:58:12.128689  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:58:12.128762  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:58:12.128794  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:58:12.128904  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:58:12.128993  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:58:12.129038  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0016053s
	I1008 14:58:12.129115  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:58:12.129187  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:58:12.129304  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:58:12.129408  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:58:12.129490  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	I1008 14:58:12.129546  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	I1008 14:58:12.129607  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	I1008 14:58:12.129609  124886 kubeadm.go:318] 
	I1008 14:58:12.129696  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:58:12.129765  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:58:12.129857  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:58:12.129935  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:58:12.129999  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:58:12.130073  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:58:12.130125  124886 kubeadm.go:318] 
	W1008 14:58:12.130230  124886 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
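	A minimal sketch of the troubleshooting step kubeadm suggests in the output above, assuming shell access to the control-plane node (for example via 'minikube ssh' for this profile) and the CRI-O socket path shown in the log:
	
	    # list all Kubernetes containers known to CRI-O, excluding pause containers
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # then inspect the logs of a failing container, substituting its ID from the listing above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	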
	I1008 14:58:12.130328  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:58:12.582965  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:58:12.596265  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:58:12.596310  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:58:12.604829  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:58:12.604840  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:58:12.604880  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:58:12.613146  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:58:12.613253  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:58:12.621163  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:58:12.629390  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:58:12.629433  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:58:12.637274  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.645831  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:58:12.645886  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.653972  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:58:12.662348  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:58:12.662392  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:58:12.670230  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:58:12.730328  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:58:12.789898  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:02:14.463875  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:02:14.464082  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:02:14.466966  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:02:14.467026  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:02:14.467112  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:02:14.467156  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:02:14.467184  124886 kubeadm.go:318] OS: Linux
	I1008 15:02:14.467232  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:02:14.467270  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:02:14.467309  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:02:14.467348  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:02:14.467386  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:02:14.467424  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:02:14.467494  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:02:14.467536  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:02:14.467596  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:02:14.467693  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:02:14.467779  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:02:14.467827  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:02:14.470599  124886 out.go:252]   - Generating certificates and keys ...
	I1008 15:02:14.470674  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:02:14.470757  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:02:14.470867  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:02:14.470954  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:02:14.471017  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:02:14.471091  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:02:14.471148  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:02:14.471198  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:02:14.471289  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:02:14.471353  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:02:14.471382  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:02:14.471424  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:02:14.471487  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:02:14.471529  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:02:14.471569  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:02:14.471615  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:02:14.471657  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:02:14.471734  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:02:14.471802  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:02:14.473075  124886 out.go:252]   - Booting up control plane ...
	I1008 15:02:14.473133  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:02:14.473209  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:02:14.473257  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:02:14.473356  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:02:14.473436  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:02:14.473538  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:02:14.473606  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:02:14.473637  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:02:14.473747  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:02:14.473833  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:02:14.473877  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.93866ms
	I1008 15:02:14.473950  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:02:14.474013  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 15:02:14.474094  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:02:14.474159  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:02:14.474228  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	I1008 15:02:14.474292  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	I1008 15:02:14.474371  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	I1008 15:02:14.474380  124886 kubeadm.go:318] 
	I1008 15:02:14.474476  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:02:14.474542  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:02:14.474617  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:02:14.474713  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:02:14.474773  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:02:14.474854  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:02:14.474900  124886 kubeadm.go:318] 
	I1008 15:02:14.474937  124886 kubeadm.go:402] duration metric: took 12m8.444330692s to StartCluster
	I1008 15:02:14.474986  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:02:14.475048  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:02:14.503050  124886 cri.go:89] found id: ""
	I1008 15:02:14.503067  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.503076  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:02:14.503082  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:02:14.503136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:02:14.530120  124886 cri.go:89] found id: ""
	I1008 15:02:14.530138  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.530145  124886 logs.go:284] No container was found matching "etcd"
	I1008 15:02:14.530149  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:02:14.530200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:02:14.555892  124886 cri.go:89] found id: ""
	I1008 15:02:14.555909  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.555916  124886 logs.go:284] No container was found matching "coredns"
	I1008 15:02:14.555921  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:02:14.555972  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:02:14.583336  124886 cri.go:89] found id: ""
	I1008 15:02:14.583351  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.583358  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:02:14.583363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:02:14.583409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:02:14.611139  124886 cri.go:89] found id: ""
	I1008 15:02:14.611160  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.611169  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:02:14.611175  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:02:14.611227  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:02:14.639405  124886 cri.go:89] found id: ""
	I1008 15:02:14.639422  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.639429  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:02:14.639434  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:02:14.639496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:02:14.666049  124886 cri.go:89] found id: ""
	I1008 15:02:14.666066  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.666073  124886 logs.go:284] No container was found matching "kindnet"
	I1008 15:02:14.666082  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:02:14.666093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:02:14.729847  124886 logs.go:123] Gathering logs for container status ...
	I1008 15:02:14.729877  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 15:02:14.760743  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 15:02:14.760761  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:02:14.827532  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 15:02:14.827555  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:02:14.842256  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:02:14.842273  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:02:14.900360  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1008 15:02:14.900380  124886 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:02:14.900418  124886 out.go:285] * 
	W1008 15:02:14.900560  124886 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.900582  124886 out.go:285] * 
	W1008 15:02:14.902936  124886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:02:14.906609  124886 out.go:203] 
	W1008 15:02:14.908139  124886 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.908172  124886 out.go:285] * 
	I1008 15:02:14.910356  124886 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:26 functional-367186 crio[5841]: time="2025-10-08T15:02:26.243733746Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-367186_kube-system_72fbb4fed11a83b82d196f480544c561_0" id=41584af8-4d83-42f0-b872-9787ba77e7c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:26 functional-367186 crio[5841]: time="2025-10-08T15:02:26.271535387Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=55f33f00-5728-4ed8-be6c-a9ec224a210f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:26 functional-367186 crio[5841]: time="2025-10-08T15:02:26.271746765Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=55f33f00-5728-4ed8-be6c-a9ec224a210f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:26 functional-367186 crio[5841]: time="2025-10-08T15:02:26.27179419Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=55f33f00-5728-4ed8-be6c-a9ec224a210f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.372842235Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=ea944348-5dbb-4e71-a853-7ad558a40f65 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.405474368Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=89359438-7327-47f3-bcd0-f6d89f265ed1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.405657445Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=89359438-7327-47f3-bcd0-f6d89f265ed1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.405708575Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=89359438-7327-47f3-bcd0-f6d89f265ed1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.44053373Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=96c92364-ae8e-41e9-bba7-65d84911515b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.440705601Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=96c92364-ae8e-41e9-bba7-65d84911515b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:27 functional-367186 crio[5841]: time="2025-10-08T15:02:27.440763112Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=96c92364-ae8e-41e9-bba7-65d84911515b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.025485551Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=a4f09100-a89a-48dc-89f1-535c556a80a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052496663Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.052654742Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.05273608Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=33fdf296-abfb-40c0-9085-262b61c3d657 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078814601Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.078975874Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.079026616Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=ba253fed-7439-49bd-bf21-d4470ca17274 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.876199233Z" level=info msg="Checking image status: kicbase/echo-server:functional-367186" id=bcb6792f-0817-4dec-aab1-936038b6e1e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905821555Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-367186" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.905973538Z" level=info msg="Image docker.io/kicbase/echo-server:functional-367186 not found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.906015096Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-367186 found" id=4ad688af-42ed-4632-82db-9177a0f4baf7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934168176Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-367186" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934313118Z" level=info msg="Image localhost/kicbase/echo-server:functional-367186 not found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:29 functional-367186 crio[5841]: time="2025-10-08T15:02:29.934355764Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-367186 found" id=cd072555-4ae4-457f-a578-ffaed9689be2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:30.415907   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:30.416406   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:30.418160   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:30.418694   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:30.420337   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:30 up  2:45,  0 user,  load average: 1.35, 0.32, 0.31
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:24 functional-367186 kubelet[14967]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c9f63674abedb97e40dbf72720752d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:24 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.249120   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c9f63674abedb97e40dbf72720752d59"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.837341   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: I1008 15:02:25.003178   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.003818   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.211699   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.251886   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > podSandboxID="49d755d590c1e6c75fffb26df4018ef3af1ece9b6aef63dbe754f59f467146f3"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252026   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252072   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.046948   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.212548   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244164   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > podSandboxID="e484b96b426485f7bb73491a3eadb180f53489ac5744f9f22e7d4f5f26a4a47a"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244294   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:26 functional-367186 kubelet[14967]:         container kube-scheduler start failed in pod kube-scheduler-functional-367186_kube-system(72fbb4fed11a83b82d196f480544c561): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:26 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.244335   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-367186" podUID="72fbb4fed11a83b82d196f480544c561"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.115019   14967 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 08 15:02:29 functional-367186 kubelet[14967]: E1008 15:02:29.438217   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (330.553846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (2.10s)
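For reference, the kubeadm advice captured in the log above boils down to two checks: list the control-plane containers that CRI-O knows about, and probe the health endpoints that the control-plane-check phase was waiting on. A minimal shell sketch of those checks, assuming a shell inside the functional-367186 node (for example via 'minikube ssh -p functional-367186') with crictl and curl available; the socket path and endpoints are taken directly from the log above:

	# list control-plane containers known to CRI-O (same command kubeadm suggests)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (substitute a real CONTAINERID from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# probe the endpoints the control-plane-check phase waited on
	curl -sk https://192.168.49.2:8441/livez        # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez          # kube-scheduler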

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-367186 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-367186 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.604281ms)

                                                
                                                
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-367186 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1008 15:02:24.681181  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.681822  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683324  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.683641  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:02:24.685150  142666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
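For reference, a minimal sketch of how the expected minikube.k8s.io/* labels could be checked by hand once the apiserver on 192.168.49.2:8441 is reachable again; --show-labels is a simpler equivalent of the go-template the test uses, and the context and node name are taken from the test output above:

	kubectl --context functional-367186 get nodes --show-labels
	kubectl --context functional-367186 get node functional-367186 -o jsonpath='{.metadata.labels}'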
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-367186
helpers_test.go:243: (dbg) docker inspect functional-367186:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	        "Created": "2025-10-08T14:35:27.530156109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 113563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T14:35:27.569039337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/hosts",
	        "LogPath": "/var/lib/docker/containers/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b/497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b-json.log",
	        "Name": "/functional-367186",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-367186:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-367186",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "497c88611e8c203c605884ef00f78ea798dee714b08ac56b445b43e9a8fa8b4b",
	                "LowerDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d801ccffdd6c712f486f93e3d57d6fc518bcf56835aaed89f3649a7d87416107/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-367186",
	                "Source": "/var/lib/docker/volumes/functional-367186/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-367186",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-367186",
	                "name.minikube.sigs.k8s.io": "functional-367186",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14503b071201cbc3d55d68db02b63e2831b36c0f42b7aa2184f5029c8ac3a930",
	            "SandboxKey": "/var/run/docker/netns/14503b071201",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-367186": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:89:68:ed:86:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "995309a8f87676622a2ff8cec956422d0e136ecb449e90e9ba136678a4653143",
	                    "EndpointID": "ab431b3c0ebe863dabf66794b1aec5f4534a1134c842c55c90c2854c51db7469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-367186",
	                        "497c88611e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
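(The inspect dump above is captured for post-mortem debugging; the published port map under NetworkSettings.Ports records that SSH on 22/tcp is reachable at 127.0.0.1:32778 and the API server port 8441/tcp at 127.0.0.1:32781. As a minimal sketch, assuming the functional-367186 container still exists, the SSH host port can be read back with the same Go template the provisioning log below uses:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-367186

This is the mapping minikube's machine provisioner dials when it opens its SSH client to 127.0.0.1:32778.)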
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-367186 -n functional-367186: exit status 2 (338.213307ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-367186 logs -n 25: (1.158367239s)
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config  │ functional-367186 config set cpus 2                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config get cpus                                                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config unset cpus                                                                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /home/docker/cp-test.txt                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh cat /etc/hostname                                                                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ config  │ functional-367186 config get cpus                                                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ service │ functional-367186 service list -o json                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ cp      │ functional-367186 cp functional-367186:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2365031035/001/cp-test.txt │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ service │ functional-367186 service --namespace=default --https --url hello-node                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /home/docker/cp-test.txt                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ service │ functional-367186 service hello-node --url --format={{.IP}}                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ service │ functional-367186 service hello-node --url                                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ cp      │ functional-367186 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh -n functional-367186 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh findmnt -T /mount-9p | grep 9p                                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ mount   │ -p functional-367186 /tmp/TestFunctionalparallelMountCmdany-port2779261458/001:/mount-9p --alsologtostderr -v=1            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ license │                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo systemctl is-active docker                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh findmnt -T /mount-9p | grep 9p                                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo systemctl is-active containerd                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh -- ls -la /mount-9p                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh cat /mount-9p/test-1759935742667385576                                                               │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ ssh     │ functional-367186 ssh sudo umount -f /mount-9p                                                                             │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh sudo cat /etc/ssl/certs/98900.pem                                                                    │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:50:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:50:02.487614  124886 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:02.487885  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.487890  124886 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:02.487894  124886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:02.488148  124886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:50:02.488703  124886 out.go:368] Setting JSON to false
	I1008 14:50:02.489732  124886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9153,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:50:02.489824  124886 start.go:141] virtualization: kvm guest
	I1008 14:50:02.491855  124886 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:50:02.493271  124886 notify.go:220] Checking for updates...
	I1008 14:50:02.493279  124886 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:50:02.494598  124886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:50:02.495836  124886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:50:02.497242  124886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:50:02.498624  124886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:50:02.499973  124886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:50:02.501897  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:02.502018  124886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:50:02.525193  124886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:50:02.525315  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.584022  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.573926988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.584110  124886 docker.go:318] overlay module found
	I1008 14:50:02.585968  124886 out.go:179] * Using the docker driver based on existing profile
	I1008 14:50:02.587279  124886 start.go:305] selected driver: docker
	I1008 14:50:02.587288  124886 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.587409  124886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:50:02.587529  124886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:50:02.641632  124886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-08 14:50:02.631975419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:50:02.642294  124886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:50:02.642317  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:02.642374  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:02.642409  124886 start.go:349] cluster config:
	{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:02.644427  124886 out.go:179] * Starting "functional-367186" primary control-plane node in "functional-367186" cluster
	I1008 14:50:02.645877  124886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:50:02.647092  124886 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:50:02.648224  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:02.648254  124886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:50:02.648262  124886 cache.go:58] Caching tarball of preloaded images
	I1008 14:50:02.648344  124886 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:50:02.648340  124886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:50:02.648350  124886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:50:02.648438  124886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/config.json ...
	I1008 14:50:02.667989  124886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 14:50:02.668000  124886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 14:50:02.668014  124886 cache.go:232] Successfully downloaded all kic artifacts
	I1008 14:50:02.668041  124886 start.go:360] acquireMachinesLock for functional-367186: {Name:mk99c5a454ce600f0d10ac0def87c1541bf3bc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:50:02.668096  124886 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "functional-367186"
	I1008 14:50:02.668109  124886 start.go:96] Skipping create...Using existing machine configuration
	I1008 14:50:02.668113  124886 fix.go:54] fixHost starting: 
	I1008 14:50:02.668337  124886 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 14:50:02.684543  124886 fix.go:112] recreateIfNeeded on functional-367186: state=Running err=<nil>
	W1008 14:50:02.684562  124886 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 14:50:02.686414  124886 out.go:252] * Updating the running docker "functional-367186" container ...
	I1008 14:50:02.686441  124886 machine.go:93] provisionDockerMachine start ...
	I1008 14:50:02.686552  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.704251  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.704482  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.704488  124886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 14:50:02.850612  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:02.850631  124886 ubuntu.go:182] provisioning hostname "functional-367186"
	I1008 14:50:02.850683  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:02.868208  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:02.868417  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:02.868424  124886 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-367186 && echo "functional-367186" | sudo tee /etc/hostname
	I1008 14:50:03.024186  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-367186
	
	I1008 14:50:03.024255  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.041071  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.041277  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.041288  124886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-367186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-367186/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-367186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:50:03.186253  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:50:03.186270  124886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 14:50:03.186287  124886 ubuntu.go:190] setting up certificates
	I1008 14:50:03.186296  124886 provision.go:84] configureAuth start
	I1008 14:50:03.186366  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:03.203498  124886 provision.go:143] copyHostCerts
	I1008 14:50:03.203554  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 14:50:03.203567  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 14:50:03.203633  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 14:50:03.203728  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 14:50:03.203738  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 14:50:03.203764  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 14:50:03.203811  124886 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 14:50:03.203815  124886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 14:50:03.203835  124886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 14:50:03.203891  124886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.functional-367186 san=[127.0.0.1 192.168.49.2 functional-367186 localhost minikube]
	I1008 14:50:03.342698  124886 provision.go:177] copyRemoteCerts
	I1008 14:50:03.342747  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:50:03.342789  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.359931  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.462754  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 14:50:03.480100  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 14:50:03.497218  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 14:50:03.514367  124886 provision.go:87] duration metric: took 328.059175ms to configureAuth
	I1008 14:50:03.514387  124886 ubuntu.go:206] setting minikube options for container-runtime
	I1008 14:50:03.514597  124886 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:03.514714  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.531920  124886 main.go:141] libmachine: Using SSH client type: native
	I1008 14:50:03.532136  124886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 14:50:03.532149  124886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:50:03.804333  124886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:50:03.804348  124886 machine.go:96] duration metric: took 1.117888769s to provisionDockerMachine
	I1008 14:50:03.804358  124886 start.go:293] postStartSetup for "functional-367186" (driver="docker")
	I1008 14:50:03.804366  124886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:50:03.804425  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:50:03.804490  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.822222  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:03.925021  124886 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:50:03.928570  124886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 14:50:03.928586  124886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 14:50:03.928595  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 14:50:03.928648  124886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 14:50:03.928714  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 14:50:03.928776  124886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts -> hosts in /etc/test/nested/copy/98900
	I1008 14:50:03.928851  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/98900
	I1008 14:50:03.936383  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:03.953682  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts --> /etc/test/nested/copy/98900/hosts (40 bytes)
	I1008 14:50:03.970665  124886 start.go:296] duration metric: took 166.291312ms for postStartSetup
	I1008 14:50:03.970729  124886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:03.970760  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:03.987625  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.086669  124886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 14:50:04.091298  124886 fix.go:56] duration metric: took 1.423178254s for fixHost
	I1008 14:50:04.091311  124886 start.go:83] releasing machines lock for "functional-367186", held for 1.423209484s
	I1008 14:50:04.091360  124886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-367186
	I1008 14:50:04.107787  124886 ssh_runner.go:195] Run: cat /version.json
	I1008 14:50:04.107823  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.107871  124886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:50:04.107944  124886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 14:50:04.125505  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.126027  124886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 14:50:04.277012  124886 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:04.283607  124886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:50:04.317281  124886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:50:04.322127  124886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:50:04.322186  124886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:50:04.329933  124886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 14:50:04.329948  124886 start.go:495] detecting cgroup driver to use...
	I1008 14:50:04.329985  124886 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 14:50:04.330037  124886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:50:04.344088  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:50:04.355897  124886 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:50:04.355934  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:50:04.370666  124886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:50:04.383061  124886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:50:04.469185  124886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:50:04.555865  124886 docker.go:234] disabling docker service ...
	I1008 14:50:04.555933  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:50:04.571649  124886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:50:04.585004  124886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:50:04.673830  124886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:50:04.762936  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:50:04.775689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:50:04.790127  124886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:50:04.790172  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.799414  124886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 14:50:04.799484  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.808366  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.816703  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.825175  124886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:50:04.833160  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.842121  124886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.850355  124886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:50:04.859028  124886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:50:04.866049  124886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 14:50:04.873109  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:04.955543  124886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:50:05.069798  124886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:50:05.069856  124886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:50:05.074109  124886 start.go:563] Will wait 60s for crictl version
	I1008 14:50:05.074171  124886 ssh_runner.go:195] Run: which crictl
	I1008 14:50:05.077741  124886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 14:50:05.103519  124886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 14:50:05.103581  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.131061  124886 ssh_runner.go:195] Run: crio --version
	I1008 14:50:05.160549  124886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 14:50:05.161770  124886 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 14:50:05.178428  124886 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 14:50:05.184282  124886 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 14:50:05.185372  124886 kubeadm.go:883] updating cluster {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:50:05.185532  124886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:50:05.185581  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.219145  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.219157  124886 crio.go:433] Images already preloaded, skipping extraction
	I1008 14:50:05.219203  124886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:50:05.244747  124886 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:50:05.244760  124886 cache_images.go:85] Images are preloaded, skipping loading
	I1008 14:50:05.244766  124886 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 14:50:05.244868  124886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-367186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:50:05.244932  124886 ssh_runner.go:195] Run: crio config
	I1008 14:50:05.290552  124886 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 14:50:05.290627  124886 cni.go:84] Creating CNI manager for ""
	I1008 14:50:05.290634  124886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:50:05.290643  124886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:50:05.290661  124886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-367186 NodeName:functional-367186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:50:05.290774  124886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-367186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:50:05.290829  124886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:50:05.299112  124886 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:50:05.299181  124886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:50:05.307519  124886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 14:50:05.319796  124886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:50:05.331988  124886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1008 14:50:05.344225  124886 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 14:50:05.347910  124886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:50:05.434760  124886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:50:05.447481  124886 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186 for IP: 192.168.49.2
	I1008 14:50:05.447496  124886 certs.go:195] generating shared ca certs ...
	I1008 14:50:05.447517  124886 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:50:05.447665  124886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 14:50:05.447699  124886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 14:50:05.447705  124886 certs.go:257] generating profile certs ...
	I1008 14:50:05.447783  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.key
	I1008 14:50:05.447822  124886 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key.36811b31
	I1008 14:50:05.447852  124886 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key
	I1008 14:50:05.447956  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 14:50:05.447979  124886 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 14:50:05.447984  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 14:50:05.448004  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 14:50:05.448022  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:50:05.448039  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 14:50:05.448072  124886 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 14:50:05.448723  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:50:05.466280  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:50:05.482753  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:50:05.499451  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:50:05.516010  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 14:50:05.532903  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:50:05.549460  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:50:05.566552  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:50:05.584248  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:50:05.601250  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 14:50:05.618600  124886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 14:50:05.636280  124886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:50:05.648959  124886 ssh_runner.go:195] Run: openssl version
	I1008 14:50:05.655372  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:50:05.664552  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668508  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.668554  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:50:05.702319  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 14:50:05.710597  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 14:50:05.719238  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722899  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.722944  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 14:50:05.756814  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 14:50:05.765232  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 14:50:05.773915  124886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777582  124886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.777627  124886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 14:50:05.811974  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 14:50:05.820369  124886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:50:05.824309  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 14:50:05.858210  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 14:50:05.892122  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 14:50:05.926997  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 14:50:05.961508  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 14:50:05.996031  124886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 14:50:06.030615  124886 kubeadm.go:400] StartCluster: {Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:50:06.030703  124886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:50:06.030782  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.059591  124886 cri.go:89] found id: ""
	I1008 14:50:06.059641  124886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:50:06.068127  124886 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 14:50:06.068151  124886 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 14:50:06.068205  124886 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 14:50:06.076226  124886 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.076725  124886 kubeconfig.go:125] found "functional-367186" server: "https://192.168.49.2:8441"
	I1008 14:50:06.077896  124886 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 14:50:06.086029  124886 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 14:35:34.873718023 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 14:50:05.341579042 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1008 14:50:06.086044  124886 kubeadm.go:1160] stopping kube-system containers ...
	I1008 14:50:06.086056  124886 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 14:50:06.086094  124886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:50:06.113178  124886 cri.go:89] found id: ""
	I1008 14:50:06.113245  124886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 14:50:06.155234  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:50:06.163592  124886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  8 14:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  8 14:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Oct  8 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  8 14:39 /etc/kubernetes/scheduler.conf
	
	I1008 14:50:06.163642  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:50:06.171483  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:50:06.179293  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.179397  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:50:06.186779  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.194154  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.194203  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:50:06.201651  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:50:06.209487  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:06.209530  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:50:06.217108  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:50:06.224828  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:06.265674  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.277477  124886 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011762147s)
	I1008 14:50:07.277533  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.443820  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.494457  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 14:50:07.547380  124886 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:50:07.547460  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.047610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:08.547636  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.047603  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:09.548254  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.047862  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:10.548513  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:11.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.048225  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:12.548074  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.048566  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:14.548179  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.047805  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:15.548258  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.048373  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:16.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.047544  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:17.548496  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.048492  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:18.548115  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.047640  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:19.548277  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.047671  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:20.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.048049  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:21.547809  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:22.548203  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.047855  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:23.547915  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.048015  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:24.547746  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.048353  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:25.548289  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.048071  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:26.547643  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.047912  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:27.548519  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.047801  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:28.547748  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.048322  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:29.548153  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.047657  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:30.547721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.047652  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:31.548219  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.047871  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:32.548380  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.047959  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:33.548581  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.047957  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:34.547650  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.048117  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:35.547561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.048296  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:36.547881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.047870  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:37.548272  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.047689  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:38.548487  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.047562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:39.547999  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.048398  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:40.547939  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.048434  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:41.547918  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.048433  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:42.548054  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.048329  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:43.548100  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.047697  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:44.548386  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.047561  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:45.548546  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.048286  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:46.547793  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.048077  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:47.547717  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.048220  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:48.548251  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.047634  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:49.548172  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.048591  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:50.548428  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.048515  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:51.547901  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.048572  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:52.548237  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.047859  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:53.548570  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.047742  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:54.548274  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.047802  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:55.548510  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.047998  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:56.547560  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.047723  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:57.547955  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.048562  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:58.547549  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.047984  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:59.547945  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.048426  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:00.547582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.047615  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:01.548404  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.048058  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:02.548196  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.048582  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:03.548046  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.047563  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:04.548551  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.047699  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:05.547610  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.048374  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:06.548211  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.048533  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:07.548306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:07.548386  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:07.574942  124886 cri.go:89] found id: ""
	I1008 14:51:07.574974  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.574982  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:07.574988  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:07.575052  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:07.600942  124886 cri.go:89] found id: ""
	I1008 14:51:07.600957  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.600964  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:07.600968  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:07.601020  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:07.627307  124886 cri.go:89] found id: ""
	I1008 14:51:07.627324  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.627331  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:07.627336  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:07.627388  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:07.653908  124886 cri.go:89] found id: ""
	I1008 14:51:07.653925  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.653933  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:07.653938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:07.653988  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:07.681787  124886 cri.go:89] found id: ""
	I1008 14:51:07.681806  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.681814  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:07.681818  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:07.681881  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:07.707870  124886 cri.go:89] found id: ""
	I1008 14:51:07.707886  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.707892  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:07.707898  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:07.707955  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:07.734640  124886 cri.go:89] found id: ""
	I1008 14:51:07.734655  124886 logs.go:282] 0 containers: []
	W1008 14:51:07.734662  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:07.734673  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:07.734682  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:07.804699  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:07.804721  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:07.819273  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:07.819290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:07.875686  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:07.868493    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.869102    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.870733    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.871218    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:07.872852    6714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:07.875696  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:07.875709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:07.940091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:07.940122  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:10.470645  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:10.481694  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:10.481739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:10.506817  124886 cri.go:89] found id: ""
	I1008 14:51:10.506832  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.506839  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:10.506843  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:10.506898  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:10.531484  124886 cri.go:89] found id: ""
	I1008 14:51:10.531499  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.531506  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:10.531511  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:10.531558  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:10.557249  124886 cri.go:89] found id: ""
	I1008 14:51:10.557268  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.557277  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:10.557282  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:10.557333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:10.582779  124886 cri.go:89] found id: ""
	I1008 14:51:10.582797  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.582833  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:10.582838  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:10.582908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:10.608584  124886 cri.go:89] found id: ""
	I1008 14:51:10.608599  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.608606  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:10.608610  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:10.608653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:10.634540  124886 cri.go:89] found id: ""
	I1008 14:51:10.634557  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.634567  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:10.634573  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:10.634635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:10.659510  124886 cri.go:89] found id: ""
	I1008 14:51:10.659526  124886 logs.go:282] 0 containers: []
	W1008 14:51:10.659532  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:10.659541  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:10.659552  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:10.727322  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:10.727344  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:10.741862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:10.741882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:10.798339  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:10.791238    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.791673    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793256    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.793839    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:10.795382    6846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:10.798350  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:10.798362  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:10.862340  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:10.862363  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.392975  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:13.404098  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:13.404165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:13.430215  124886 cri.go:89] found id: ""
	I1008 14:51:13.430231  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.430237  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:13.430242  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:13.430283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:13.455821  124886 cri.go:89] found id: ""
	I1008 14:51:13.455837  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.455844  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:13.455853  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:13.455903  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:13.482279  124886 cri.go:89] found id: ""
	I1008 14:51:13.482296  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.482316  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:13.482321  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:13.482366  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:13.508868  124886 cri.go:89] found id: ""
	I1008 14:51:13.508883  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.508893  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:13.508900  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:13.508957  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:13.534938  124886 cri.go:89] found id: ""
	I1008 14:51:13.534954  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.534960  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:13.534964  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:13.535012  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:13.562594  124886 cri.go:89] found id: ""
	I1008 14:51:13.562611  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.562620  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:13.562626  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:13.562683  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:13.588476  124886 cri.go:89] found id: ""
	I1008 14:51:13.588493  124886 logs.go:282] 0 containers: []
	W1008 14:51:13.588505  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:13.588513  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:13.588522  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:13.617969  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:13.617996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:13.687989  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:13.688010  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:13.702556  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:13.702577  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:13.758238  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:13.751363    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.751919    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753529    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.753967    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:13.755520    6978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:13.758274  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:13.758288  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.324420  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:16.335355  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:16.335413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:16.361211  124886 cri.go:89] found id: ""
	I1008 14:51:16.361227  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.361233  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:16.361238  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:16.361283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:16.388154  124886 cri.go:89] found id: ""
	I1008 14:51:16.388170  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.388176  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:16.388180  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:16.388234  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:16.414515  124886 cri.go:89] found id: ""
	I1008 14:51:16.414532  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.414539  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:16.414545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:16.414606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:16.441112  124886 cri.go:89] found id: ""
	I1008 14:51:16.441130  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.441137  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:16.441143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:16.441196  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:16.467403  124886 cri.go:89] found id: ""
	I1008 14:51:16.467423  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.467434  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:16.467439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:16.467515  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:16.493912  124886 cri.go:89] found id: ""
	I1008 14:51:16.493994  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.494017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:16.494025  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:16.494086  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:16.520736  124886 cri.go:89] found id: ""
	I1008 14:51:16.520754  124886 logs.go:282] 0 containers: []
	W1008 14:51:16.520761  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:16.520770  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:16.520784  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:16.578205  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:16.570978    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.571661    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573304    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.573783    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:16.575330    7079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:16.578222  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:16.578237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:16.641639  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:16.641661  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:16.671073  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:16.671090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:16.740879  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:16.740901  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.256721  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:19.267621  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:19.267671  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:19.293587  124886 cri.go:89] found id: ""
	I1008 14:51:19.293605  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.293611  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:19.293616  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:19.293661  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:19.318866  124886 cri.go:89] found id: ""
	I1008 14:51:19.318886  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.318898  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:19.318905  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:19.318973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:19.344646  124886 cri.go:89] found id: ""
	I1008 14:51:19.344660  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.344668  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:19.344673  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:19.344730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:19.370979  124886 cri.go:89] found id: ""
	I1008 14:51:19.370994  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.371001  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:19.371006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:19.371049  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:19.398115  124886 cri.go:89] found id: ""
	I1008 14:51:19.398134  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.398144  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:19.398149  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:19.398205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:19.425579  124886 cri.go:89] found id: ""
	I1008 14:51:19.425594  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.425602  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:19.425606  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:19.425664  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:19.451179  124886 cri.go:89] found id: ""
	I1008 14:51:19.451194  124886 logs.go:282] 0 containers: []
	W1008 14:51:19.451201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:19.451209  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:19.451219  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:19.515409  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:19.515430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:19.530193  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:19.530208  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:19.587513  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:19.580627    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.581195    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.582742    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.583212    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:19.584854    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:19.587527  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:19.587538  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:19.650244  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:19.650266  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:22.181221  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:22.192437  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:22.192530  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:22.218691  124886 cri.go:89] found id: ""
	I1008 14:51:22.218709  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.218717  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:22.218722  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:22.218784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:22.245011  124886 cri.go:89] found id: ""
	I1008 14:51:22.245028  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.245035  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:22.245040  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:22.245087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:22.271669  124886 cri.go:89] found id: ""
	I1008 14:51:22.271698  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.271706  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:22.271710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:22.271775  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:22.298500  124886 cri.go:89] found id: ""
	I1008 14:51:22.298520  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.298529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:22.298537  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:22.298598  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:22.324858  124886 cri.go:89] found id: ""
	I1008 14:51:22.324873  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.324879  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:22.324883  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:22.324930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:22.351540  124886 cri.go:89] found id: ""
	I1008 14:51:22.351556  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.351563  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:22.351568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:22.351613  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:22.377421  124886 cri.go:89] found id: ""
	I1008 14:51:22.377458  124886 logs.go:282] 0 containers: []
	W1008 14:51:22.377470  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:22.377482  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:22.377497  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:22.450410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:22.450465  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:22.465230  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:22.465257  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:22.521387  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:22.514495    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.515106    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.516661    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.517107    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:22.518666    7326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:22.521398  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:22.521409  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:22.586462  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:22.586490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.117667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:25.129264  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:25.129309  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:25.155977  124886 cri.go:89] found id: ""
	I1008 14:51:25.155998  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.156007  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:25.156016  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:25.156090  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:25.183268  124886 cri.go:89] found id: ""
	I1008 14:51:25.183288  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.183297  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:25.183302  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:25.183355  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:25.209728  124886 cri.go:89] found id: ""
	I1008 14:51:25.209745  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.209752  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:25.209763  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:25.209807  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:25.236946  124886 cri.go:89] found id: ""
	I1008 14:51:25.236961  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.236968  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:25.236974  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:25.237017  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:25.263116  124886 cri.go:89] found id: ""
	I1008 14:51:25.263132  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.263138  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:25.263143  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:25.263189  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:25.288378  124886 cri.go:89] found id: ""
	I1008 14:51:25.288395  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.288401  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:25.288406  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:25.288460  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:25.315195  124886 cri.go:89] found id: ""
	I1008 14:51:25.315210  124886 logs.go:282] 0 containers: []
	W1008 14:51:25.315217  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:25.315225  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:25.315237  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:25.371376  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:25.364155    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.364704    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366247    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.366650    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:25.368186    7445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:25.371387  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:25.371396  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:25.435272  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:25.435294  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:25.465980  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:25.465996  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:25.535450  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:25.535477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.050276  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:28.061620  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:28.061668  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:28.088245  124886 cri.go:89] found id: ""
	I1008 14:51:28.088265  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.088274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:28.088278  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:28.088326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:28.113839  124886 cri.go:89] found id: ""
	I1008 14:51:28.113859  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.113870  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:28.113876  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:28.113940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:28.141395  124886 cri.go:89] found id: ""
	I1008 14:51:28.141414  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.141423  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:28.141429  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:28.141503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:28.168333  124886 cri.go:89] found id: ""
	I1008 14:51:28.168348  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.168354  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:28.168360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:28.168413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:28.192847  124886 cri.go:89] found id: ""
	I1008 14:51:28.192864  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.192870  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:28.192876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:28.192936  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:28.218780  124886 cri.go:89] found id: ""
	I1008 14:51:28.218795  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.218801  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:28.218806  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:28.218875  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:28.244592  124886 cri.go:89] found id: ""
	I1008 14:51:28.244612  124886 logs.go:282] 0 containers: []
	W1008 14:51:28.244622  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:28.244631  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:28.244643  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:28.315714  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:28.315736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:28.329938  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:28.329954  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:28.387618  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:28.380231    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.380863    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.382503    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.383078    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:28.384818    7576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:28.387629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:28.387641  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:28.453202  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:28.453224  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:30.984664  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:30.995891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:30.995939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:31.022304  124886 cri.go:89] found id: ""
	I1008 14:51:31.022328  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.022338  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:31.022344  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:31.022401  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:31.049041  124886 cri.go:89] found id: ""
	I1008 14:51:31.049060  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.049069  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:31.049075  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:31.049123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:31.076924  124886 cri.go:89] found id: ""
	I1008 14:51:31.076940  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.076949  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:31.076953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:31.077003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:31.102922  124886 cri.go:89] found id: ""
	I1008 14:51:31.102942  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.102950  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:31.102955  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:31.103003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:31.131223  124886 cri.go:89] found id: ""
	I1008 14:51:31.131237  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.131244  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:31.131248  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:31.131294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:31.157335  124886 cri.go:89] found id: ""
	I1008 14:51:31.157350  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.157356  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:31.157361  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:31.157403  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:31.183539  124886 cri.go:89] found id: ""
	I1008 14:51:31.183556  124886 logs.go:282] 0 containers: []
	W1008 14:51:31.183563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:31.183571  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:31.183582  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:31.254970  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:31.254991  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:31.269535  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:31.269556  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:31.325660  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:31.318543    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.319017    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.320533    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.321008    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:31.322616    7697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:31.325690  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:31.325702  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:31.390180  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:31.390201  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:33.920121  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:33.931525  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:33.931580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:33.956578  124886 cri.go:89] found id: ""
	I1008 14:51:33.956594  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.956601  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:33.956606  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:33.956652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:33.983065  124886 cri.go:89] found id: ""
	I1008 14:51:33.983083  124886 logs.go:282] 0 containers: []
	W1008 14:51:33.983094  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:33.983100  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:33.983176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:34.009180  124886 cri.go:89] found id: ""
	I1008 14:51:34.009198  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.009206  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:34.009211  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:34.009266  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:34.035120  124886 cri.go:89] found id: ""
	I1008 14:51:34.035138  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.035145  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:34.035151  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:34.035207  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:34.060490  124886 cri.go:89] found id: ""
	I1008 14:51:34.060506  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.060512  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:34.060517  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:34.060565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:34.086320  124886 cri.go:89] found id: ""
	I1008 14:51:34.086338  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.086346  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:34.086351  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:34.086394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:34.111862  124886 cri.go:89] found id: ""
	I1008 14:51:34.111883  124886 logs.go:282] 0 containers: []
	W1008 14:51:34.111893  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:34.111902  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:34.111921  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:34.181743  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:34.181765  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:34.196152  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:34.196171  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:34.252034  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:34.245349    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.245854    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247381    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.247781    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:34.249319    7813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:34.252045  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:34.252056  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:34.316760  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:34.316781  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:36.845595  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:36.856603  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:36.856648  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:36.883175  124886 cri.go:89] found id: ""
	I1008 14:51:36.883194  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.883202  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:36.883209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:36.883267  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:36.910081  124886 cri.go:89] found id: ""
	I1008 14:51:36.910096  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.910103  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:36.910107  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:36.910157  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:36.935036  124886 cri.go:89] found id: ""
	I1008 14:51:36.935051  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.935062  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:36.935068  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:36.935122  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:36.961981  124886 cri.go:89] found id: ""
	I1008 14:51:36.961998  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.962009  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:36.962016  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:36.962126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:36.989270  124886 cri.go:89] found id: ""
	I1008 14:51:36.989290  124886 logs.go:282] 0 containers: []
	W1008 14:51:36.989299  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:36.989306  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:36.989363  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:37.016135  124886 cri.go:89] found id: ""
	I1008 14:51:37.016153  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.016161  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:37.016165  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:37.016215  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:37.043172  124886 cri.go:89] found id: ""
	I1008 14:51:37.043191  124886 logs.go:282] 0 containers: []
	W1008 14:51:37.043201  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:37.043211  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:37.043227  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:37.100326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:37.093324    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.093924    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095526    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.095946    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:37.097376    7933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:37.100338  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:37.100351  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:37.163756  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:37.163777  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:37.193435  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:37.193471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:37.260908  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:37.260933  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:39.777967  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:39.789007  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:39.789059  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:39.815862  124886 cri.go:89] found id: ""
	I1008 14:51:39.815879  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.815886  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:39.815890  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:39.815942  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:39.841950  124886 cri.go:89] found id: ""
	I1008 14:51:39.841966  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.841973  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:39.841979  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:39.842039  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:39.868668  124886 cri.go:89] found id: ""
	I1008 14:51:39.868686  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.868696  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:39.868702  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:39.868755  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:39.895534  124886 cri.go:89] found id: ""
	I1008 14:51:39.895554  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.895564  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:39.895571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:39.895622  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:39.922579  124886 cri.go:89] found id: ""
	I1008 14:51:39.922598  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.922608  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:39.922614  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:39.922660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:39.948340  124886 cri.go:89] found id: ""
	I1008 14:51:39.948356  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.948363  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:39.948367  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:39.948410  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:39.975730  124886 cri.go:89] found id: ""
	I1008 14:51:39.975746  124886 logs.go:282] 0 containers: []
	W1008 14:51:39.975752  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:39.975761  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:39.975771  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:40.004995  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:40.005014  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:40.075523  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:40.075546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:40.090104  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:40.090120  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:40.147226  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:40.140619    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.141171    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.142760    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.143237    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:40.144514    8076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:40.147238  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:40.147253  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:42.711983  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:42.723356  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:42.723413  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:42.749822  124886 cri.go:89] found id: ""
	I1008 14:51:42.749838  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.749844  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:42.749849  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:42.749917  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:42.776397  124886 cri.go:89] found id: ""
	I1008 14:51:42.776414  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.776421  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:42.776425  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:42.776493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:42.802489  124886 cri.go:89] found id: ""
	I1008 14:51:42.802508  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.802518  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:42.802524  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:42.802572  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:42.829172  124886 cri.go:89] found id: ""
	I1008 14:51:42.829187  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.829193  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:42.829198  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:42.829251  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:42.853534  124886 cri.go:89] found id: ""
	I1008 14:51:42.853552  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.853561  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:42.853568  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:42.853635  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:42.879567  124886 cri.go:89] found id: ""
	I1008 14:51:42.879583  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.879595  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:42.879601  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:42.879652  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:42.904961  124886 cri.go:89] found id: ""
	I1008 14:51:42.904979  124886 logs.go:282] 0 containers: []
	W1008 14:51:42.904986  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:42.904993  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:42.905009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:42.974363  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:42.974384  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:42.989172  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:42.989192  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:43.045247  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:43.037845    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.038365    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.039958    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.040556    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:43.042200    8184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:43.045260  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:43.045275  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:43.106406  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:43.106429  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:45.637311  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:45.648040  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:45.648095  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:45.673462  124886 cri.go:89] found id: ""
	I1008 14:51:45.673481  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.673491  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:45.673497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:45.673550  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:45.698163  124886 cri.go:89] found id: ""
	I1008 14:51:45.698181  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.698188  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:45.698193  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:45.698246  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:45.723467  124886 cri.go:89] found id: ""
	I1008 14:51:45.723561  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.723573  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:45.723581  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:45.723641  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:45.748702  124886 cri.go:89] found id: ""
	I1008 14:51:45.748717  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.748726  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:45.748732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:45.748796  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:45.775585  124886 cri.go:89] found id: ""
	I1008 14:51:45.775604  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.775612  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:45.775617  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:45.775670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:45.801010  124886 cri.go:89] found id: ""
	I1008 14:51:45.801025  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.801031  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:45.801036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:45.801084  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:45.827042  124886 cri.go:89] found id: ""
	I1008 14:51:45.827059  124886 logs.go:282] 0 containers: []
	W1008 14:51:45.827067  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:45.827075  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:45.827086  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:45.895458  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:45.895480  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:45.910085  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:45.910109  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:45.966571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:45.959330    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.959887    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961512    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.961974    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:45.963526    8303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:45.966593  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:45.966605  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:46.027581  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:46.027606  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:48.557168  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:48.568079  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:48.568130  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:48.594574  124886 cri.go:89] found id: ""
	I1008 14:51:48.594594  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.594603  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:48.594609  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:48.594653  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:48.621962  124886 cri.go:89] found id: ""
	I1008 14:51:48.621977  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.621984  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:48.621989  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:48.622035  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:48.648065  124886 cri.go:89] found id: ""
	I1008 14:51:48.648080  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.648087  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:48.648091  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:48.648146  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:48.675285  124886 cri.go:89] found id: ""
	I1008 14:51:48.675300  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.675307  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:48.675311  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:48.675356  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:48.701191  124886 cri.go:89] found id: ""
	I1008 14:51:48.701210  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.701218  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:48.701225  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:48.701271  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:48.729042  124886 cri.go:89] found id: ""
	I1008 14:51:48.729069  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.729079  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:48.729086  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:48.729136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:48.754548  124886 cri.go:89] found id: ""
	I1008 14:51:48.754564  124886 logs.go:282] 0 containers: []
	W1008 14:51:48.754572  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:48.754580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:48.754590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:48.822673  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:48.822705  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:48.836997  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:48.837017  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:48.894196  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:48.886898    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.887478    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889023    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.889438    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:48.890944    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:48.894212  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:48.894223  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:48.955101  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:48.955127  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.487365  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:51.498554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:51.498603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:51.525066  124886 cri.go:89] found id: ""
	I1008 14:51:51.525081  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.525088  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:51.525094  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:51.525147  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:51.550909  124886 cri.go:89] found id: ""
	I1008 14:51:51.550926  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.550933  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:51.550938  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:51.550989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:51.576844  124886 cri.go:89] found id: ""
	I1008 14:51:51.576860  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.576867  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:51.576871  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:51.576919  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:51.603876  124886 cri.go:89] found id: ""
	I1008 14:51:51.603894  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.603900  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:51.603907  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:51.603958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:51.630518  124886 cri.go:89] found id: ""
	I1008 14:51:51.630533  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.630540  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:51.630545  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:51.630591  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:51.656592  124886 cri.go:89] found id: ""
	I1008 14:51:51.656625  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.656634  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:51.656641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:51.656686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:51.682732  124886 cri.go:89] found id: ""
	I1008 14:51:51.682750  124886 logs.go:282] 0 containers: []
	W1008 14:51:51.682757  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:51.682766  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:51.682775  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:51.742589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:51.742612  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:51.771353  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:51.771369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:51.842948  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:51.842971  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:51.857862  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:51.857882  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:51.915551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:51.908356    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.908906    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910507    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.910926    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:51.912531    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.417267  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:54.428273  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:54.428333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:54.454016  124886 cri.go:89] found id: ""
	I1008 14:51:54.454030  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.454037  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:54.454042  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:54.454097  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:54.479088  124886 cri.go:89] found id: ""
	I1008 14:51:54.479104  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.479112  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:54.479117  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:54.479171  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:54.504383  124886 cri.go:89] found id: ""
	I1008 14:51:54.504401  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.504411  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:54.504418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:54.504481  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:54.530502  124886 cri.go:89] found id: ""
	I1008 14:51:54.530522  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.530529  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:54.530534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:54.530578  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:54.556899  124886 cri.go:89] found id: ""
	I1008 14:51:54.556920  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.556929  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:54.556935  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:54.556983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:54.582860  124886 cri.go:89] found id: ""
	I1008 14:51:54.582878  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.582888  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:54.582895  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:54.582954  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:54.609653  124886 cri.go:89] found id: ""
	I1008 14:51:54.609670  124886 logs.go:282] 0 containers: []
	W1008 14:51:54.609679  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:54.609689  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:54.609704  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:54.666095  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:54.658963    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.659578    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661163    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.661589    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:54.663106    8669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:54.666106  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:54.666116  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:54.725670  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:54.725693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:51:54.755377  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:54.755394  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:54.824839  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:54.824860  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.340378  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:51:57.351013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:51:57.351087  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:51:57.377174  124886 cri.go:89] found id: ""
	I1008 14:51:57.377192  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.377201  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:51:57.377208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:51:57.377259  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:51:57.403239  124886 cri.go:89] found id: ""
	I1008 14:51:57.403254  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.403261  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:51:57.403271  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:51:57.403317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:51:57.429149  124886 cri.go:89] found id: ""
	I1008 14:51:57.429168  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.429179  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:51:57.429185  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:51:57.429244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:51:57.454095  124886 cri.go:89] found id: ""
	I1008 14:51:57.454114  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.454128  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:51:57.454133  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:51:57.454187  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:51:57.479640  124886 cri.go:89] found id: ""
	I1008 14:51:57.479658  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.479665  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:51:57.479670  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:51:57.479725  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:51:57.505776  124886 cri.go:89] found id: ""
	I1008 14:51:57.505795  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.505805  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:51:57.505811  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:51:57.505853  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:51:57.531837  124886 cri.go:89] found id: ""
	I1008 14:51:57.531852  124886 logs.go:282] 0 containers: []
	W1008 14:51:57.531860  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:51:57.531867  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:51:57.531878  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:51:57.599522  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:51:57.599544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:51:57.614111  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:51:57.614132  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:51:57.671063  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:51:57.663985    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.664594    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666249    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.666734    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:51:57.668250    8805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:51:57.671074  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:51:57.671084  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:51:57.732027  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:51:57.732050  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:00.263338  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:00.274100  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:00.274167  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:00.299677  124886 cri.go:89] found id: ""
	I1008 14:52:00.299692  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.299698  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:00.299703  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:00.299744  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:00.325037  124886 cri.go:89] found id: ""
	I1008 14:52:00.325055  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.325065  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:00.325071  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:00.325128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:00.351372  124886 cri.go:89] found id: ""
	I1008 14:52:00.351388  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.351397  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:00.351402  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:00.351465  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:00.377746  124886 cri.go:89] found id: ""
	I1008 14:52:00.377761  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.377767  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:00.377772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:00.377838  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:00.403806  124886 cri.go:89] found id: ""
	I1008 14:52:00.403821  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.403827  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:00.403832  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:00.403888  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:00.431653  124886 cri.go:89] found id: ""
	I1008 14:52:00.431673  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.431682  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:00.431687  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:00.431732  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:00.458706  124886 cri.go:89] found id: ""
	I1008 14:52:00.458720  124886 logs.go:282] 0 containers: []
	W1008 14:52:00.458727  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:00.458735  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:00.458744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:00.527333  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:00.527355  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:00.545238  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:00.545260  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:00.604166  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:00.596114    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.596669    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.598250    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.599895    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:00.600370    8922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:00.604178  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:00.604190  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:00.667338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:00.667360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.196993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:03.207677  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:03.207730  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:03.232932  124886 cri.go:89] found id: ""
	I1008 14:52:03.232952  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.232963  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:03.232969  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:03.233019  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:03.257910  124886 cri.go:89] found id: ""
	I1008 14:52:03.257927  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.257934  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:03.257939  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:03.257989  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:03.282476  124886 cri.go:89] found id: ""
	I1008 14:52:03.282491  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.282498  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:03.282503  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:03.282556  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:03.307994  124886 cri.go:89] found id: ""
	I1008 14:52:03.308009  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.308016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:03.308020  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:03.308066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:03.333961  124886 cri.go:89] found id: ""
	I1008 14:52:03.333978  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.333985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:03.333990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:03.334036  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:03.360461  124886 cri.go:89] found id: ""
	I1008 14:52:03.360480  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.360491  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:03.360498  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:03.360546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:03.385935  124886 cri.go:89] found id: ""
	I1008 14:52:03.385951  124886 logs.go:282] 0 containers: []
	W1008 14:52:03.385958  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:03.385965  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:03.385980  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:03.399673  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:03.399689  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:03.456423  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:03.449295    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.449868    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451412    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.451841    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:03.453416    9037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:03.456433  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:03.456459  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:03.519728  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:03.519750  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:03.549347  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:03.549365  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.121403  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:06.132277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:06.132329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:06.158234  124886 cri.go:89] found id: ""
	I1008 14:52:06.158248  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.158255  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:06.158260  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:06.158308  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:06.184118  124886 cri.go:89] found id: ""
	I1008 14:52:06.184136  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.184145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:06.184151  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:06.184201  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:06.210586  124886 cri.go:89] found id: ""
	I1008 14:52:06.210604  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.210613  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:06.210619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:06.210682  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:06.236986  124886 cri.go:89] found id: ""
	I1008 14:52:06.237004  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.237013  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:06.237018  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:06.237064  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:06.264151  124886 cri.go:89] found id: ""
	I1008 14:52:06.264172  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.264182  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:06.264188  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:06.264240  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:06.290106  124886 cri.go:89] found id: ""
	I1008 14:52:06.290120  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.290126  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:06.290132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:06.290177  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:06.316419  124886 cri.go:89] found id: ""
	I1008 14:52:06.316435  124886 logs.go:282] 0 containers: []
	W1008 14:52:06.316453  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:06.316464  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:06.316477  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:06.377522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:06.377544  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:06.407056  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:06.407075  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:06.474318  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:06.474342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:06.488482  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:06.488502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:06.546904  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:06.539411    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.540087    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541297    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.541766    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:06.543482    9176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.048569  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:09.059380  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:09.059436  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:09.085888  124886 cri.go:89] found id: ""
	I1008 14:52:09.085906  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.085912  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:09.085918  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:09.085971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:09.113858  124886 cri.go:89] found id: ""
	I1008 14:52:09.113875  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.113882  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:09.113892  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:09.113939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:09.140388  124886 cri.go:89] found id: ""
	I1008 14:52:09.140407  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.140414  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:09.140420  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:09.140493  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:09.168003  124886 cri.go:89] found id: ""
	I1008 14:52:09.168018  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.168025  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:09.168030  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:09.168075  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:09.194655  124886 cri.go:89] found id: ""
	I1008 14:52:09.194681  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.194690  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:09.194696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:09.194757  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:09.221388  124886 cri.go:89] found id: ""
	I1008 14:52:09.221405  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.221411  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:09.221416  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:09.221490  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:09.247075  124886 cri.go:89] found id: ""
	I1008 14:52:09.247093  124886 logs.go:282] 0 containers: []
	W1008 14:52:09.247102  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:09.247122  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:09.247133  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:09.304638  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:09.297414    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.297989    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.299609    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.300155    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:09.301732    9289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:09.304650  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:09.304664  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:09.368718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:09.368742  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:09.399217  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:09.399239  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:09.468608  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:09.468629  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:11.984769  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:11.995534  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:11.995596  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:12.020218  124886 cri.go:89] found id: ""
	I1008 14:52:12.020234  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.020241  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:12.020247  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:12.020289  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:12.045959  124886 cri.go:89] found id: ""
	I1008 14:52:12.045978  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.045989  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:12.045996  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:12.046103  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:12.072101  124886 cri.go:89] found id: ""
	I1008 14:52:12.072118  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.072125  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:12.072129  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:12.072174  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:12.098793  124886 cri.go:89] found id: ""
	I1008 14:52:12.098808  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.098814  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:12.098819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:12.098871  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:12.124876  124886 cri.go:89] found id: ""
	I1008 14:52:12.124891  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.124900  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:12.124906  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:12.124973  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:12.151678  124886 cri.go:89] found id: ""
	I1008 14:52:12.151695  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.151703  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:12.151708  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:12.151764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:12.176969  124886 cri.go:89] found id: ""
	I1008 14:52:12.176986  124886 logs.go:282] 0 containers: []
	W1008 14:52:12.176994  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:12.177004  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:12.177019  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:12.247581  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:12.247604  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:12.262272  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:12.262290  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:12.319283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:12.312115    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.312741    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314399    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.314958    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:12.316558    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:12.319306  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:12.319318  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:12.383384  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:12.383406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:14.914713  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:14.925495  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:14.925548  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:14.951182  124886 cri.go:89] found id: ""
	I1008 14:52:14.951197  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.951205  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:14.951209  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:14.951265  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:14.978925  124886 cri.go:89] found id: ""
	I1008 14:52:14.978941  124886 logs.go:282] 0 containers: []
	W1008 14:52:14.978948  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:14.978953  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:14.979004  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:15.003964  124886 cri.go:89] found id: ""
	I1008 14:52:15.003983  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.003992  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:15.003997  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:15.004061  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:15.030077  124886 cri.go:89] found id: ""
	I1008 14:52:15.030095  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.030102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:15.030107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:15.030154  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:15.055689  124886 cri.go:89] found id: ""
	I1008 14:52:15.055704  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.055711  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:15.055715  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:15.055760  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:15.081174  124886 cri.go:89] found id: ""
	I1008 14:52:15.081191  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.081198  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:15.081203  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:15.081262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:15.107235  124886 cri.go:89] found id: ""
	I1008 14:52:15.107251  124886 logs.go:282] 0 containers: []
	W1008 14:52:15.107257  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:15.107265  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:15.107279  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:15.174130  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:15.174161  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:15.188435  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:15.188471  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:15.244706  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:15.237672    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.238174    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.239766    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.240213    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:15.241753    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:15.244720  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:15.244735  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:15.305071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:15.305098  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:17.835094  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:17.845787  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:17.845870  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:17.871734  124886 cri.go:89] found id: ""
	I1008 14:52:17.871749  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.871757  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:17.871764  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:17.871823  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:17.897412  124886 cri.go:89] found id: ""
	I1008 14:52:17.897433  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.897458  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:17.897467  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:17.897535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:17.925096  124886 cri.go:89] found id: ""
	I1008 14:52:17.925110  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.925117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:17.925122  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:17.925168  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:17.951272  124886 cri.go:89] found id: ""
	I1008 14:52:17.951289  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.951297  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:17.951301  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:17.951347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:17.976965  124886 cri.go:89] found id: ""
	I1008 14:52:17.976985  124886 logs.go:282] 0 containers: []
	W1008 14:52:17.976992  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:17.976998  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:17.977042  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:18.003041  124886 cri.go:89] found id: ""
	I1008 14:52:18.003057  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.003064  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:18.003069  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:18.003113  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:18.028732  124886 cri.go:89] found id: ""
	I1008 14:52:18.028748  124886 logs.go:282] 0 containers: []
	W1008 14:52:18.028756  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:18.028764  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:18.028774  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:18.092440  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:18.092467  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:18.121965  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:18.121984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:18.191653  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:18.191679  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:18.205820  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:18.205839  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:18.261002  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:18.254217    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.254744    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256369    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.256865    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:18.258401    9676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:20.762706  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:20.773592  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:20.773660  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:20.799324  124886 cri.go:89] found id: ""
	I1008 14:52:20.799340  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.799347  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:20.799352  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:20.799394  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:20.825415  124886 cri.go:89] found id: ""
	I1008 14:52:20.825430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.825436  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:20.825452  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:20.825504  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:20.851415  124886 cri.go:89] found id: ""
	I1008 14:52:20.851430  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.851437  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:20.851454  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:20.851503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:20.878438  124886 cri.go:89] found id: ""
	I1008 14:52:20.878476  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.878484  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:20.878489  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:20.878536  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:20.903857  124886 cri.go:89] found id: ""
	I1008 14:52:20.903873  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.903884  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:20.903890  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:20.903948  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:20.930746  124886 cri.go:89] found id: ""
	I1008 14:52:20.930763  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.930770  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:20.930791  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:20.930842  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:20.956487  124886 cri.go:89] found id: ""
	I1008 14:52:20.956504  124886 logs.go:282] 0 containers: []
	W1008 14:52:20.956510  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:20.956518  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:20.956528  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:21.026065  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:21.026087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:21.040112  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:21.040129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:21.095891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:21.088955    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.089519    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091035    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.091502    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:21.093077    9789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:21.095902  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:21.095914  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:21.159107  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:21.159129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:23.687668  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:23.698250  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:23.698317  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:23.723805  124886 cri.go:89] found id: ""
	I1008 14:52:23.723832  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.723842  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:23.723850  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:23.723900  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:23.749813  124886 cri.go:89] found id: ""
	I1008 14:52:23.749831  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.749840  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:23.749847  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:23.749918  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:23.774918  124886 cri.go:89] found id: ""
	I1008 14:52:23.774934  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.774940  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:23.774945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:23.774999  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:23.800898  124886 cri.go:89] found id: ""
	I1008 14:52:23.800918  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.800925  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:23.800930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:23.800978  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:23.827330  124886 cri.go:89] found id: ""
	I1008 14:52:23.827348  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.827356  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:23.827360  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:23.827405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:23.853485  124886 cri.go:89] found id: ""
	I1008 14:52:23.853503  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.853510  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:23.853515  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:23.853560  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:23.878936  124886 cri.go:89] found id: ""
	I1008 14:52:23.878957  124886 logs.go:282] 0 containers: []
	W1008 14:52:23.878967  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:23.878976  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:23.878994  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:23.934831  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:23.928203    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.928676    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930201    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.930630    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:23.932105    9906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:23.934841  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:23.934851  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:23.993858  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:23.993885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:24.022945  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:24.022962  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:24.092836  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:24.092865  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.608369  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:26.619983  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:26.620060  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:26.646593  124886 cri.go:89] found id: ""
	I1008 14:52:26.646611  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.646621  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:26.646627  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:26.646678  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:26.673294  124886 cri.go:89] found id: ""
	I1008 14:52:26.673310  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.673317  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:26.673324  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:26.673367  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:26.699235  124886 cri.go:89] found id: ""
	I1008 14:52:26.699251  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.699257  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:26.699262  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:26.699320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:26.724993  124886 cri.go:89] found id: ""
	I1008 14:52:26.725009  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.725016  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:26.725021  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:26.725074  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:26.749744  124886 cri.go:89] found id: ""
	I1008 14:52:26.749760  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.749767  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:26.749772  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:26.749821  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:26.775226  124886 cri.go:89] found id: ""
	I1008 14:52:26.775246  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.775255  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:26.775260  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:26.775316  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:26.805104  124886 cri.go:89] found id: ""
	I1008 14:52:26.805120  124886 logs.go:282] 0 containers: []
	W1008 14:52:26.805128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:26.805136  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:26.805152  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:26.834601  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:26.834618  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:26.900340  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:26.900361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:26.914389  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:26.914406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:26.969896  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:26.963095   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.963671   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965202   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.965598   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:26.967107   10049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:26.969911  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:26.969927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.531143  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:29.542884  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:29.542952  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:29.570323  124886 cri.go:89] found id: ""
	I1008 14:52:29.570339  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.570345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:29.570350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:29.570395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:29.596735  124886 cri.go:89] found id: ""
	I1008 14:52:29.596750  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.596756  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:29.596762  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:29.596811  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:29.622878  124886 cri.go:89] found id: ""
	I1008 14:52:29.622892  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.622898  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:29.622903  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:29.622950  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:29.648836  124886 cri.go:89] found id: ""
	I1008 14:52:29.648857  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.648880  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:29.648887  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:29.648939  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:29.674729  124886 cri.go:89] found id: ""
	I1008 14:52:29.674747  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.674753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:29.674758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:29.674802  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:29.700542  124886 cri.go:89] found id: ""
	I1008 14:52:29.700558  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.700565  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:29.700571  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:29.700615  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:29.726353  124886 cri.go:89] found id: ""
	I1008 14:52:29.726369  124886 logs.go:282] 0 containers: []
	W1008 14:52:29.726375  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:29.726383  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:29.726395  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:29.790538  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:29.790560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:29.805071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:29.805087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:29.861336  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:29.854341   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.854911   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.856502   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.857018   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:29.858531   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:29.861354  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:29.861367  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:29.921484  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:29.921507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.452001  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:32.462783  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:32.462839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:32.488895  124886 cri.go:89] found id: ""
	I1008 14:52:32.488913  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.488922  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:32.488929  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:32.488977  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:32.514655  124886 cri.go:89] found id: ""
	I1008 14:52:32.514674  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.514683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:32.514688  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:32.514739  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:32.542007  124886 cri.go:89] found id: ""
	I1008 14:52:32.542027  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.542037  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:32.542044  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:32.542100  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:32.569946  124886 cri.go:89] found id: ""
	I1008 14:52:32.569963  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.569970  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:32.569976  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:32.570022  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:32.595032  124886 cri.go:89] found id: ""
	I1008 14:52:32.595051  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.595061  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:32.595066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:32.595127  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:32.621883  124886 cri.go:89] found id: ""
	I1008 14:52:32.621903  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.621923  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:32.621930  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:32.621983  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:32.647589  124886 cri.go:89] found id: ""
	I1008 14:52:32.647606  124886 logs.go:282] 0 containers: []
	W1008 14:52:32.647612  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:32.647620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:32.647630  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:32.703098  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:32.696210   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.696781   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698345   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.698733   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:32.700308   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:32.703108  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:32.703129  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:32.766481  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:32.766502  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:32.794530  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:32.794546  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:32.864662  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:32.864687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.381050  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:35.391807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:35.391868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:35.418369  124886 cri.go:89] found id: ""
	I1008 14:52:35.418388  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.418397  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:35.418402  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:35.418467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:35.444660  124886 cri.go:89] found id: ""
	I1008 14:52:35.444676  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.444683  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:35.444687  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:35.444736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:35.471158  124886 cri.go:89] found id: ""
	I1008 14:52:35.471183  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.471190  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:35.471195  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:35.471238  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:35.496271  124886 cri.go:89] found id: ""
	I1008 14:52:35.496288  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.496295  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:35.496300  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:35.496345  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:35.521987  124886 cri.go:89] found id: ""
	I1008 14:52:35.522005  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.522015  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:35.522039  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:35.522098  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:35.547647  124886 cri.go:89] found id: ""
	I1008 14:52:35.547664  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.547673  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:35.547678  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:35.547723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:35.573056  124886 cri.go:89] found id: ""
	I1008 14:52:35.573075  124886 logs.go:282] 0 containers: []
	W1008 14:52:35.573085  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:35.573109  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:35.573123  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:35.640898  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:35.640923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:35.655247  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:35.655265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:35.712555  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:35.705487   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.706004   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.707597   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.708049   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:35.709652   10400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:35.712565  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:35.712575  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:35.772556  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:35.772579  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.301881  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:38.312627  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:38.312694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:38.337192  124886 cri.go:89] found id: ""
	I1008 14:52:38.337210  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.337220  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:38.337227  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:38.337278  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:38.361703  124886 cri.go:89] found id: ""
	I1008 14:52:38.361721  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.361730  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:38.361736  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:38.361786  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:38.387263  124886 cri.go:89] found id: ""
	I1008 14:52:38.387279  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.387286  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:38.387290  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:38.387334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:38.413808  124886 cri.go:89] found id: ""
	I1008 14:52:38.413824  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.413830  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:38.413835  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:38.413880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:38.440014  124886 cri.go:89] found id: ""
	I1008 14:52:38.440029  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.440036  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:38.440041  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:38.440085  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:38.466144  124886 cri.go:89] found id: ""
	I1008 14:52:38.466164  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.466174  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:38.466181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:38.466229  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:38.491536  124886 cri.go:89] found id: ""
	I1008 14:52:38.491554  124886 logs.go:282] 0 containers: []
	W1008 14:52:38.491563  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:38.491573  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:38.491584  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:38.520248  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:38.520265  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:38.588833  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:38.588861  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:38.603136  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:38.603155  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:38.659278  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:38.652318   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.652897   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654485   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.654894   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:38.656372   10539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:38.659290  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:38.659301  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.224716  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:41.235550  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:41.235600  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:41.261421  124886 cri.go:89] found id: ""
	I1008 14:52:41.261436  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.261455  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:41.261463  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:41.261516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:41.286798  124886 cri.go:89] found id: ""
	I1008 14:52:41.286813  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.286839  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:41.286844  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:41.286904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:41.312542  124886 cri.go:89] found id: ""
	I1008 14:52:41.312558  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.312567  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:41.312574  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:41.312623  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:41.339001  124886 cri.go:89] found id: ""
	I1008 14:52:41.339016  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.339022  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:41.339027  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:41.339073  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:41.365019  124886 cri.go:89] found id: ""
	I1008 14:52:41.365040  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.365049  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:41.365056  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:41.365115  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:41.389878  124886 cri.go:89] found id: ""
	I1008 14:52:41.389897  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.389904  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:41.389910  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:41.389960  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:41.415856  124886 cri.go:89] found id: ""
	I1008 14:52:41.415875  124886 logs.go:282] 0 containers: []
	W1008 14:52:41.415884  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:41.415895  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:41.415909  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:41.481175  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:41.481196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:41.495356  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:41.495373  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:41.552891  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:41.545696   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.546284   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.547871   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.548300   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:41.549833   10647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:41.552910  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:41.552927  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:41.615245  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:41.615282  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:44.146351  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:44.157234  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:44.157294  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:44.183016  124886 cri.go:89] found id: ""
	I1008 14:52:44.183032  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.183039  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:44.183044  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:44.183094  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:44.209452  124886 cri.go:89] found id: ""
	I1008 14:52:44.209471  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.209480  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:44.209487  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:44.209535  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:44.236057  124886 cri.go:89] found id: ""
	I1008 14:52:44.236079  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.236088  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:44.236094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:44.236165  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:44.262249  124886 cri.go:89] found id: ""
	I1008 14:52:44.262265  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.262274  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:44.262281  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:44.262333  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:44.288222  124886 cri.go:89] found id: ""
	I1008 14:52:44.288240  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.288249  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:44.288254  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:44.288303  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:44.312991  124886 cri.go:89] found id: ""
	I1008 14:52:44.313009  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.313017  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:44.313022  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:44.313066  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:44.338794  124886 cri.go:89] found id: ""
	I1008 14:52:44.338814  124886 logs.go:282] 0 containers: []
	W1008 14:52:44.338823  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:44.338835  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:44.338849  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:44.408632  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:44.408655  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:44.423360  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:44.423381  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:44.481035  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:44.474570   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.475077   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.476680   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.477129   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:44.478328   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:44.481052  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:44.481068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:44.545061  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:44.545093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.075772  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:47.086739  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:47.086782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:47.112465  124886 cri.go:89] found id: ""
	I1008 14:52:47.112483  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.112492  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:47.112497  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:47.112546  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:47.140124  124886 cri.go:89] found id: ""
	I1008 14:52:47.140139  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.140145  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:47.140150  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:47.140194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:47.167347  124886 cri.go:89] found id: ""
	I1008 14:52:47.167366  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.167376  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:47.167382  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:47.167428  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:47.193008  124886 cri.go:89] found id: ""
	I1008 14:52:47.193025  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.193032  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:47.193037  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:47.193081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:47.218907  124886 cri.go:89] found id: ""
	I1008 14:52:47.218922  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.218932  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:47.218938  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:47.218992  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:47.244390  124886 cri.go:89] found id: ""
	I1008 14:52:47.244406  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.244413  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:47.244418  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:47.244485  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:47.270432  124886 cri.go:89] found id: ""
	I1008 14:52:47.270460  124886 logs.go:282] 0 containers: []
	W1008 14:52:47.270473  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:47.270482  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:47.270496  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:47.284419  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:47.284434  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:47.340814  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:47.333908   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.334487   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336050   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.336514   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:47.338027   10894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:47.340829  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:47.340840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:47.405347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:47.405371  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:47.434675  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:47.434693  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:50.001509  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:50.012521  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:50.012580  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:50.038871  124886 cri.go:89] found id: ""
	I1008 14:52:50.038886  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.038895  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:50.038901  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:50.038945  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:50.065691  124886 cri.go:89] found id: ""
	I1008 14:52:50.065707  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.065713  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:50.065718  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:50.065764  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:50.091421  124886 cri.go:89] found id: ""
	I1008 14:52:50.091439  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.091459  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:50.091466  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:50.091516  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:50.117900  124886 cri.go:89] found id: ""
	I1008 14:52:50.117916  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.117922  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:50.117927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:50.117971  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:50.143795  124886 cri.go:89] found id: ""
	I1008 14:52:50.143811  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.143837  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:50.143842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:50.143889  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:50.170009  124886 cri.go:89] found id: ""
	I1008 14:52:50.170025  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.170032  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:50.170036  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:50.170081  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:50.195182  124886 cri.go:89] found id: ""
	I1008 14:52:50.195198  124886 logs.go:282] 0 containers: []
	W1008 14:52:50.195204  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:50.195213  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:50.195226  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:50.208906  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:50.208923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:50.263732  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:50.256835   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.257378   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.258907   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.259390   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:50.260896   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:50.263744  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:50.263754  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:50.321967  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:50.321990  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:50.350825  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:50.350843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:52.919243  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:52.929975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:52.930069  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:52.956423  124886 cri.go:89] found id: ""
	I1008 14:52:52.956439  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.956463  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:52.956470  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:52.956519  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:52.982128  124886 cri.go:89] found id: ""
	I1008 14:52:52.982143  124886 logs.go:282] 0 containers: []
	W1008 14:52:52.982150  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:52.982155  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:52.982204  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:53.008335  124886 cri.go:89] found id: ""
	I1008 14:52:53.008351  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.008358  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:53.008363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:53.008416  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:53.035683  124886 cri.go:89] found id: ""
	I1008 14:52:53.035698  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.035705  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:53.035710  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:53.035753  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:53.061482  124886 cri.go:89] found id: ""
	I1008 14:52:53.061590  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.061610  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:53.061619  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:53.061673  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:53.088358  124886 cri.go:89] found id: ""
	I1008 14:52:53.088375  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.088384  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:53.088390  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:53.088467  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:53.113970  124886 cri.go:89] found id: ""
	I1008 14:52:53.113988  124886 logs.go:282] 0 containers: []
	W1008 14:52:53.113995  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:53.114003  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:53.114016  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:53.181486  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:53.181511  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:53.195603  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:53.195620  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:53.251571  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:53.244694   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.245192   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.246826   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.247337   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:53.248852   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:53.251582  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:53.251592  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:53.312589  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:53.312610  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:55.843180  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:55.854192  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:55.854250  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:55.878967  124886 cri.go:89] found id: ""
	I1008 14:52:55.878984  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.878992  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:55.878997  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:55.879050  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:55.904136  124886 cri.go:89] found id: ""
	I1008 14:52:55.904151  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.904157  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:55.904174  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:55.904216  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:55.928319  124886 cri.go:89] found id: ""
	I1008 14:52:55.928337  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.928348  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:55.928353  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:55.928406  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:55.955314  124886 cri.go:89] found id: ""
	I1008 14:52:55.955330  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.955338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:55.955345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:55.955405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:55.980957  124886 cri.go:89] found id: ""
	I1008 14:52:55.980976  124886 logs.go:282] 0 containers: []
	W1008 14:52:55.980985  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:55.980992  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:55.981040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:56.006492  124886 cri.go:89] found id: ""
	I1008 14:52:56.006507  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.006514  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:56.006519  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:56.006566  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:56.032919  124886 cri.go:89] found id: ""
	I1008 14:52:56.032934  124886 logs.go:282] 0 containers: []
	W1008 14:52:56.032940  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:56.032948  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:56.032960  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:56.061693  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:56.061713  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:56.127262  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:56.127284  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:52:56.141728  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:56.141744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:56.197783  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:56.190143   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.190756   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193080   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.193609   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:56.195133   11284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:56.197799  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:56.197815  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:58.759309  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:52:58.770096  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:52:58.770150  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:52:58.796177  124886 cri.go:89] found id: ""
	I1008 14:52:58.796192  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.796199  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:52:58.796208  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:52:58.796260  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:52:58.821988  124886 cri.go:89] found id: ""
	I1008 14:52:58.822006  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.822013  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:52:58.822018  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:52:58.822068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:52:58.847935  124886 cri.go:89] found id: ""
	I1008 14:52:58.847953  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.847961  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:52:58.847966  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:52:58.848015  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:52:58.874796  124886 cri.go:89] found id: ""
	I1008 14:52:58.874814  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.874821  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:52:58.874826  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:52:58.874880  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:52:58.899925  124886 cri.go:89] found id: ""
	I1008 14:52:58.899941  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.899948  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:52:58.899953  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:52:58.900008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:52:58.926934  124886 cri.go:89] found id: ""
	I1008 14:52:58.926950  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.926958  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:52:58.926963  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:52:58.927006  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:52:58.953664  124886 cri.go:89] found id: ""
	I1008 14:52:58.953680  124886 logs.go:282] 0 containers: []
	W1008 14:52:58.953687  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:52:58.953694  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:52:58.953709  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:52:59.010616  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:52:59.003397   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.003936   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005527   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.005967   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:52:59.007532   11387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:52:59.010629  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:52:59.010640  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:52:59.071358  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:52:59.071382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:52:59.099863  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:52:59.099886  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:52:59.168071  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:52:59.168163  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.684667  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:01.695456  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:01.695524  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:01.721627  124886 cri.go:89] found id: ""
	I1008 14:53:01.721644  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.721652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:01.721656  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:01.721715  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:01.748495  124886 cri.go:89] found id: ""
	I1008 14:53:01.748512  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.748518  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:01.748523  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:01.748583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:01.774281  124886 cri.go:89] found id: ""
	I1008 14:53:01.774298  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.774310  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:01.774316  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:01.774377  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:01.800414  124886 cri.go:89] found id: ""
	I1008 14:53:01.800430  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.800437  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:01.800458  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:01.800513  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:01.825727  124886 cri.go:89] found id: ""
	I1008 14:53:01.825746  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.825753  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:01.825758  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:01.825804  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:01.852777  124886 cri.go:89] found id: ""
	I1008 14:53:01.852794  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.852802  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:01.852807  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:01.852855  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:01.879499  124886 cri.go:89] found id: ""
	I1008 14:53:01.879516  124886 logs.go:282] 0 containers: []
	W1008 14:53:01.879522  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:01.879530  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:01.879542  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:01.908367  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:01.908386  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:01.976337  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:01.976358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:01.990844  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:01.990863  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:02.047840  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:02.041007   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.041547   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043180   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.043634   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:02.045184   11528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:02.047852  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:02.047864  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.612824  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:04.623886  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:04.623937  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:04.650245  124886 cri.go:89] found id: ""
	I1008 14:53:04.650265  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.650274  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:04.650282  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:04.650338  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:04.675795  124886 cri.go:89] found id: ""
	I1008 14:53:04.675814  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.675849  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:04.675856  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:04.675910  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:04.701855  124886 cri.go:89] found id: ""
	I1008 14:53:04.701874  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.701883  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:04.701889  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:04.701951  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:04.727569  124886 cri.go:89] found id: ""
	I1008 14:53:04.727584  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.727590  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:04.727595  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:04.727637  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:04.753254  124886 cri.go:89] found id: ""
	I1008 14:53:04.753269  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.753276  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:04.753280  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:04.753329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:04.779529  124886 cri.go:89] found id: ""
	I1008 14:53:04.779548  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.779557  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:04.779564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:04.779611  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:04.806307  124886 cri.go:89] found id: ""
	I1008 14:53:04.806326  124886 logs.go:282] 0 containers: []
	W1008 14:53:04.806335  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:04.806346  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:04.806361  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:04.820357  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:04.820374  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:04.876718  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:04.869130   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.869702   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.871933   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.872407   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:04.873910   11633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:04.876732  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:04.876748  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:04.940387  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:04.940412  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:04.969994  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:04.970009  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.538422  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:07.550831  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:07.550884  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:07.577673  124886 cri.go:89] found id: ""
	I1008 14:53:07.577687  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.577693  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:07.577698  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:07.577750  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:07.603662  124886 cri.go:89] found id: ""
	I1008 14:53:07.603680  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.603695  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:07.603700  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:07.603746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:07.629802  124886 cri.go:89] found id: ""
	I1008 14:53:07.629821  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.629830  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:07.629834  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:07.629886  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:07.656081  124886 cri.go:89] found id: ""
	I1008 14:53:07.656096  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.656102  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:07.656107  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:07.656170  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:07.682162  124886 cri.go:89] found id: ""
	I1008 14:53:07.682177  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.682184  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:07.682189  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:07.682233  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:07.708617  124886 cri.go:89] found id: ""
	I1008 14:53:07.708635  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.708648  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:07.708653  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:07.708708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:07.734755  124886 cri.go:89] found id: ""
	I1008 14:53:07.734772  124886 logs.go:282] 0 containers: []
	W1008 14:53:07.734782  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:07.734793  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:07.734807  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:07.794522  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:07.794548  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:07.823563  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:07.823581  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:07.892786  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:07.892808  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:07.907262  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:07.907281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:07.962940  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:07.955713   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.956243   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.957928   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.958403   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:07.959967   11779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.464656  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:10.476746  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:10.476800  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:10.502937  124886 cri.go:89] found id: ""
	I1008 14:53:10.502958  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.502968  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:10.502974  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:10.503025  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:10.529780  124886 cri.go:89] found id: ""
	I1008 14:53:10.529796  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.529803  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:10.529807  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:10.529856  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:10.556092  124886 cri.go:89] found id: ""
	I1008 14:53:10.556108  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.556117  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:10.556124  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:10.556184  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:10.582264  124886 cri.go:89] found id: ""
	I1008 14:53:10.582281  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.582290  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:10.582296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:10.582354  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:10.608631  124886 cri.go:89] found id: ""
	I1008 14:53:10.608647  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.608655  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:10.608662  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:10.608721  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:10.635697  124886 cri.go:89] found id: ""
	I1008 14:53:10.635715  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.635725  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:10.635732  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:10.635793  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:10.661998  124886 cri.go:89] found id: ""
	I1008 14:53:10.662018  124886 logs.go:282] 0 containers: []
	W1008 14:53:10.662028  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:10.662040  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:10.662055  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:10.728096  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:10.728121  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:10.742521  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:10.742543  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:10.799551  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:10.792540   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.793167   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.794723   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.795142   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:10.796721   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:10.799566  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:10.799578  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:10.863614  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:10.863636  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.396084  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:13.407066  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:13.407128  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:13.433323  124886 cri.go:89] found id: ""
	I1008 14:53:13.433339  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.433345  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:13.433350  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:13.433393  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:13.460409  124886 cri.go:89] found id: ""
	I1008 14:53:13.460510  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.460522  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:13.460528  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:13.460589  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:13.487660  124886 cri.go:89] found id: ""
	I1008 14:53:13.487679  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.487689  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:13.487696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:13.487746  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:13.515522  124886 cri.go:89] found id: ""
	I1008 14:53:13.515538  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.515546  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:13.515551  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:13.515595  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:13.540751  124886 cri.go:89] found id: ""
	I1008 14:53:13.540767  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.540773  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:13.540778  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:13.540846  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:13.566812  124886 cri.go:89] found id: ""
	I1008 14:53:13.566829  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.566837  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:13.566842  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:13.566904  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:13.593236  124886 cri.go:89] found id: ""
	I1008 14:53:13.593255  124886 logs.go:282] 0 containers: []
	W1008 14:53:13.593262  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:13.593271  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:13.593281  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:13.657627  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:13.657651  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:13.686303  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:13.686320  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:13.755568  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:13.755591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:13.769800  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:13.769819  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:13.826318  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:13.819488   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.819989   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821505   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.821994   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:13.823432   12022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:16.327013  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:16.337840  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:16.337908  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:16.363203  124886 cri.go:89] found id: ""
	I1008 14:53:16.363221  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.363230  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:16.363235  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:16.363288  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:16.388535  124886 cri.go:89] found id: ""
	I1008 14:53:16.388551  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.388557  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:16.388563  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:16.388606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:16.414195  124886 cri.go:89] found id: ""
	I1008 14:53:16.414213  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.414221  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:16.414226  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:16.414274  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:16.440199  124886 cri.go:89] found id: ""
	I1008 14:53:16.440214  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.440221  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:16.440227  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:16.440283  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:16.465899  124886 cri.go:89] found id: ""
	I1008 14:53:16.465918  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.465925  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:16.465931  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:16.465976  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:16.491135  124886 cri.go:89] found id: ""
	I1008 14:53:16.491151  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.491157  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:16.491162  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:16.491205  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:16.517298  124886 cri.go:89] found id: ""
	I1008 14:53:16.517315  124886 logs.go:282] 0 containers: []
	W1008 14:53:16.517323  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:16.517331  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:16.517342  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:16.581777  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:16.581803  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:16.611824  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:16.611843  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:16.679935  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:16.679957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:16.694087  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:16.694103  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:16.750382  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:16.742855   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.743490   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745097   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.745503   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:16.747578   12149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:19.252068  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:19.262927  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:19.262980  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:19.288263  124886 cri.go:89] found id: ""
	I1008 14:53:19.288280  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.288286  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:19.288291  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:19.288334  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:19.314749  124886 cri.go:89] found id: ""
	I1008 14:53:19.314769  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.314776  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:19.314781  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:19.314833  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:19.343105  124886 cri.go:89] found id: ""
	I1008 14:53:19.343124  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.343132  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:19.343137  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:19.343194  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:19.369348  124886 cri.go:89] found id: ""
	I1008 14:53:19.369367  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.369376  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:19.369384  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:19.369438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:19.394541  124886 cri.go:89] found id: ""
	I1008 14:53:19.394556  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.394564  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:19.394569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:19.394617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:19.419883  124886 cri.go:89] found id: ""
	I1008 14:53:19.419900  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.419907  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:19.419911  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:19.419959  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:19.447316  124886 cri.go:89] found id: ""
	I1008 14:53:19.447332  124886 logs.go:282] 0 containers: []
	W1008 14:53:19.447339  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:19.447347  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:19.447360  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:19.509190  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:19.509213  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:19.538580  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:19.538601  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:19.610379  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:19.610406  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:19.625094  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:19.625115  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:19.682583  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:19.675969   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.676489   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.677982   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.678383   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:19.679743   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:22.184381  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:22.195435  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:22.195496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:22.222530  124886 cri.go:89] found id: ""
	I1008 14:53:22.222549  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.222559  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:22.222565  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:22.222631  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:22.249103  124886 cri.go:89] found id: ""
	I1008 14:53:22.249118  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.249125  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:22.249130  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:22.249185  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:22.275859  124886 cri.go:89] found id: ""
	I1008 14:53:22.275877  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.275886  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:22.275891  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:22.275944  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:22.301816  124886 cri.go:89] found id: ""
	I1008 14:53:22.301835  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.301845  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:22.301852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:22.301906  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:22.328795  124886 cri.go:89] found id: ""
	I1008 14:53:22.328810  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.328817  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:22.328821  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:22.328877  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:22.355119  124886 cri.go:89] found id: ""
	I1008 14:53:22.355134  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.355141  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:22.355146  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:22.355200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:22.382211  124886 cri.go:89] found id: ""
	I1008 14:53:22.382229  124886 logs.go:282] 0 containers: []
	W1008 14:53:22.382238  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:22.382248  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:22.382262  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:22.442814  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:22.442840  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:22.473721  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:22.473746  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:22.539788  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:22.539811  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:22.554277  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:22.554295  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:22.610102  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:22.603170   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.603723   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605241   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.605654   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:22.607233   12397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.110358  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:25.121359  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:25.121409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:25.146726  124886 cri.go:89] found id: ""
	I1008 14:53:25.146741  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.146747  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:25.146752  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:25.146797  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:25.173762  124886 cri.go:89] found id: ""
	I1008 14:53:25.173780  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.173788  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:25.173792  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:25.173839  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:25.200613  124886 cri.go:89] found id: ""
	I1008 14:53:25.200630  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.200636  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:25.200641  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:25.200686  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:25.227307  124886 cri.go:89] found id: ""
	I1008 14:53:25.227327  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.227338  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:25.227345  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:25.227395  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:25.253257  124886 cri.go:89] found id: ""
	I1008 14:53:25.253272  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.253278  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:25.253283  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:25.253329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:25.281060  124886 cri.go:89] found id: ""
	I1008 14:53:25.281077  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.281089  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:25.281094  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:25.281140  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:25.306651  124886 cri.go:89] found id: ""
	I1008 14:53:25.306668  124886 logs.go:282] 0 containers: []
	W1008 14:53:25.306678  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:25.306688  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:25.306699  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:25.373410  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:25.373433  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:25.388282  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:25.388304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:25.445863  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:25.438591   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.439162   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.440786   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.441366   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:25.442964   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:25.445874  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:25.445885  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:25.510564  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:25.510590  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.041417  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:28.052378  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:28.052432  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:28.078711  124886 cri.go:89] found id: ""
	I1008 14:53:28.078728  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.078734  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:28.078740  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:28.078782  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:28.105010  124886 cri.go:89] found id: ""
	I1008 14:53:28.105025  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.105031  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:28.105036  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:28.105088  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:28.131983  124886 cri.go:89] found id: ""
	I1008 14:53:28.132001  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.132011  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:28.132017  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:28.132076  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:28.159135  124886 cri.go:89] found id: ""
	I1008 14:53:28.159153  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.159160  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:28.159166  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:28.159212  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:28.187793  124886 cri.go:89] found id: ""
	I1008 14:53:28.187811  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.187821  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:28.187827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:28.187872  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:28.214232  124886 cri.go:89] found id: ""
	I1008 14:53:28.214251  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.214265  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:28.214272  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:28.214335  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:28.240649  124886 cri.go:89] found id: ""
	I1008 14:53:28.240663  124886 logs.go:282] 0 containers: []
	W1008 14:53:28.240669  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:28.240677  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:28.240687  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:28.304071  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:28.304094  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:28.333331  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:28.333346  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:28.401896  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:28.401919  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:28.416514  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:28.416531  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:28.472271  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:28.465592   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.466140   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.467705   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.468112   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:28.469634   12647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:30.972553  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:30.983612  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:30.983666  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:31.011336  124886 cri.go:89] found id: ""
	I1008 14:53:31.011350  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.011357  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:31.011362  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:31.011405  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:31.036913  124886 cri.go:89] found id: ""
	I1008 14:53:31.036935  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.036944  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:31.036948  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:31.037003  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:31.063500  124886 cri.go:89] found id: ""
	I1008 14:53:31.063516  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.063523  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:31.063527  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:31.063582  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:31.091035  124886 cri.go:89] found id: ""
	I1008 14:53:31.091057  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.091066  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:31.091073  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:31.091123  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:31.117295  124886 cri.go:89] found id: ""
	I1008 14:53:31.117310  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.117317  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:31.117322  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:31.117372  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:31.143795  124886 cri.go:89] found id: ""
	I1008 14:53:31.143810  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.143815  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:31.143820  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:31.143863  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:31.170134  124886 cri.go:89] found id: ""
	I1008 14:53:31.170150  124886 logs.go:282] 0 containers: []
	W1008 14:53:31.170157  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:31.170164  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:31.170174  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:31.241300  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:31.241324  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:31.255637  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:31.255656  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:31.312716  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:31.305600   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.306255   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.307813   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.308214   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:31.309824   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:31.312725  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:31.312736  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:31.377091  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:31.377114  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:33.907080  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:33.918207  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:33.918262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:33.944092  124886 cri.go:89] found id: ""
	I1008 14:53:33.944111  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.944122  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:33.944129  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:33.944192  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:33.970271  124886 cri.go:89] found id: ""
	I1008 14:53:33.970286  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.970293  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:33.970298  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:33.970347  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:33.996407  124886 cri.go:89] found id: ""
	I1008 14:53:33.996421  124886 logs.go:282] 0 containers: []
	W1008 14:53:33.996427  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:33.996433  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:33.996503  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:34.023513  124886 cri.go:89] found id: ""
	I1008 14:53:34.023533  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.023542  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:34.023549  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:34.023606  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:34.050777  124886 cri.go:89] found id: ""
	I1008 14:53:34.050797  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.050807  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:34.050813  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:34.050868  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:34.077691  124886 cri.go:89] found id: ""
	I1008 14:53:34.077710  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.077719  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:34.077724  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:34.077769  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:34.104354  124886 cri.go:89] found id: ""
	I1008 14:53:34.104373  124886 logs.go:282] 0 containers: []
	W1008 14:53:34.104380  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:34.104388  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:34.104404  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:34.171873  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:34.171899  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:34.185891  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:34.185908  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:34.243162  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:34.235900   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.236490   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238035   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.238581   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:34.240243   12881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:34.243172  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:34.243185  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:34.306766  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:34.306791  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:36.836905  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:36.848013  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:36.848068  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:36.873912  124886 cri.go:89] found id: ""
	I1008 14:53:36.873930  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.873938  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:36.873944  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:36.873994  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:36.899859  124886 cri.go:89] found id: ""
	I1008 14:53:36.899875  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.899881  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:36.899886  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:36.899930  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:36.926292  124886 cri.go:89] found id: ""
	I1008 14:53:36.926314  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.926321  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:36.926326  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:36.926370  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:36.952172  124886 cri.go:89] found id: ""
	I1008 14:53:36.952189  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.952196  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:36.952201  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:36.952248  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:36.978525  124886 cri.go:89] found id: ""
	I1008 14:53:36.978542  124886 logs.go:282] 0 containers: []
	W1008 14:53:36.978548  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:36.978553  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:36.978605  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:37.005955  124886 cri.go:89] found id: ""
	I1008 14:53:37.005973  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.005984  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:37.005990  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:37.006037  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:37.032282  124886 cri.go:89] found id: ""
	I1008 14:53:37.032300  124886 logs.go:282] 0 containers: []
	W1008 14:53:37.032310  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:37.032320  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:37.032336  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:37.100471  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:37.100507  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:37.114707  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:37.114727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:37.173117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:37.165245   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.166926   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.167346   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.168902   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:37.169367   12995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:37.173128  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:37.173138  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:37.237613  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:37.237637  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:39.769167  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:39.780181  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:39.780239  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:39.805900  124886 cri.go:89] found id: ""
	I1008 14:53:39.805921  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.805928  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:39.805935  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:39.805982  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:39.832463  124886 cri.go:89] found id: ""
	I1008 14:53:39.832485  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.832493  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:39.832501  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:39.832565  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:39.859105  124886 cri.go:89] found id: ""
	I1008 14:53:39.859120  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.859127  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:39.859132  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:39.859176  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:39.885372  124886 cri.go:89] found id: ""
	I1008 14:53:39.885395  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.885402  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:39.885410  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:39.885476  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:39.911669  124886 cri.go:89] found id: ""
	I1008 14:53:39.911684  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.911691  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:39.911696  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:39.911743  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:39.939236  124886 cri.go:89] found id: ""
	I1008 14:53:39.939254  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.939263  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:39.939269  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:39.939329  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:39.967816  124886 cri.go:89] found id: ""
	I1008 14:53:39.967833  124886 logs.go:282] 0 containers: []
	W1008 14:53:39.967839  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:39.967847  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:39.967859  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:39.982071  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:39.982090  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:40.038524  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:40.031711   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.032256   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.033910   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.034374   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:40.035931   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:40.038545  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:40.038560  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:40.099347  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:40.099369  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:40.128637  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:40.128654  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.700345  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:42.711170  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:42.711224  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:42.738404  124886 cri.go:89] found id: ""
	I1008 14:53:42.738420  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.738426  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:42.738431  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:42.738496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:42.765170  124886 cri.go:89] found id: ""
	I1008 14:53:42.765185  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.765192  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:42.765196  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:42.765244  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:42.790844  124886 cri.go:89] found id: ""
	I1008 14:53:42.790862  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.790870  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:42.790876  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:42.790920  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:42.817749  124886 cri.go:89] found id: ""
	I1008 14:53:42.817765  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.817772  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:42.817777  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:42.817826  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:42.844796  124886 cri.go:89] found id: ""
	I1008 14:53:42.844815  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.844823  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:42.844827  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:42.844882  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:42.870976  124886 cri.go:89] found id: ""
	I1008 14:53:42.870993  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.871001  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:42.871006  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:42.871051  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:42.897679  124886 cri.go:89] found id: ""
	I1008 14:53:42.897698  124886 logs.go:282] 0 containers: []
	W1008 14:53:42.897707  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:42.897716  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:42.897727  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:42.967720  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:42.967744  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:42.981967  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:42.981984  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:43.039728  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:43.032751   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.033351   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.034967   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.035421   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:43.036955   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:43.039742  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:43.039753  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:43.101886  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:43.101911  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:45.635598  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:45.646564  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:45.646617  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:45.673775  124886 cri.go:89] found id: ""
	I1008 14:53:45.673791  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.673797  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:45.673802  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:45.673845  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:45.700610  124886 cri.go:89] found id: ""
	I1008 14:53:45.700627  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.700633  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:45.700638  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:45.700694  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:45.726636  124886 cri.go:89] found id: ""
	I1008 14:53:45.726653  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.726662  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:45.726669  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:45.726723  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:45.753352  124886 cri.go:89] found id: ""
	I1008 14:53:45.753367  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.753374  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:45.753379  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:45.753434  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:45.780250  124886 cri.go:89] found id: ""
	I1008 14:53:45.780266  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.780272  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:45.780277  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:45.780326  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:45.805847  124886 cri.go:89] found id: ""
	I1008 14:53:45.805863  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.805870  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:45.805875  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:45.805940  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:45.832274  124886 cri.go:89] found id: ""
	I1008 14:53:45.832290  124886 logs.go:282] 0 containers: []
	W1008 14:53:45.832297  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:45.832304  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:45.832315  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:45.901895  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:45.901925  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:45.916420  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:45.916438  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:45.972937  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:45.965933   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.966506   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968007   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.968480   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:45.970006   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:45.972948  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:45.972958  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:46.034817  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:46.034841  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.564993  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:48.576052  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:48.576102  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:48.602007  124886 cri.go:89] found id: ""
	I1008 14:53:48.602024  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.602031  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:48.602035  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:48.602080  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:48.628143  124886 cri.go:89] found id: ""
	I1008 14:53:48.628160  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.628168  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:48.628173  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:48.628218  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:48.655880  124886 cri.go:89] found id: ""
	I1008 14:53:48.655898  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.655907  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:48.655913  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:48.655958  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:48.683255  124886 cri.go:89] found id: ""
	I1008 14:53:48.683270  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.683278  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:48.683284  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:48.683337  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:48.709473  124886 cri.go:89] found id: ""
	I1008 14:53:48.709492  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.709501  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:48.709508  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:48.709567  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:48.736246  124886 cri.go:89] found id: ""
	I1008 14:53:48.736268  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.736274  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:48.736279  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:48.736327  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:48.763463  124886 cri.go:89] found id: ""
	I1008 14:53:48.763483  124886 logs.go:282] 0 containers: []
	W1008 14:53:48.763493  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:48.763503  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:48.763518  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:48.792359  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:48.792378  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:48.859056  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:48.859077  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:48.873385  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:48.873405  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:48.931065  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:48.923038   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.923545   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.925972   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.926427   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:48.928025   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:48.931075  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:48.931087  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:51.494941  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:51.505819  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:51.505869  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:51.533622  124886 cri.go:89] found id: ""
	I1008 14:53:51.533643  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.533652  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:51.533659  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:51.533707  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:51.560499  124886 cri.go:89] found id: ""
	I1008 14:53:51.560519  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.560528  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:51.560536  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:51.560584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:51.587541  124886 cri.go:89] found id: ""
	I1008 14:53:51.587556  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.587564  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:51.587569  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:51.587616  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:51.614266  124886 cri.go:89] found id: ""
	I1008 14:53:51.614284  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.614291  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:51.614296  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:51.614343  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:51.639614  124886 cri.go:89] found id: ""
	I1008 14:53:51.639632  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.639641  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:51.639649  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:51.639708  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:51.667306  124886 cri.go:89] found id: ""
	I1008 14:53:51.667322  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.667328  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:51.667333  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:51.667375  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:51.692160  124886 cri.go:89] found id: ""
	I1008 14:53:51.692175  124886 logs.go:282] 0 containers: []
	W1008 14:53:51.692182  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:51.692191  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:51.692204  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:51.720341  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:51.720358  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:51.785600  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:51.785622  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:51.800298  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:51.800317  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:51.857283  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:51.849986   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.850568   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852145   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.852657   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:51.854222   13635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:51.857293  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:51.857304  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:54.424673  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:54.435975  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:54.436023  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:54.462429  124886 cri.go:89] found id: ""
	I1008 14:53:54.462462  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.462472  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:54.462479  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:54.462528  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:54.489261  124886 cri.go:89] found id: ""
	I1008 14:53:54.489276  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.489284  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:54.489289  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:54.489344  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:54.514962  124886 cri.go:89] found id: ""
	I1008 14:53:54.514980  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.514990  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:54.514996  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:54.515040  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:54.541414  124886 cri.go:89] found id: ""
	I1008 14:53:54.541428  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.541435  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:54.541439  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:54.541501  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:54.567913  124886 cri.go:89] found id: ""
	I1008 14:53:54.567931  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.567940  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:54.567945  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:54.568008  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:54.594492  124886 cri.go:89] found id: ""
	I1008 14:53:54.594511  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.594522  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:54.594528  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:54.594583  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:54.621305  124886 cri.go:89] found id: ""
	I1008 14:53:54.621321  124886 logs.go:282] 0 containers: []
	W1008 14:53:54.621330  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:54.621338  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:54.621348  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:53:54.648627  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:54.648645  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:54.717360  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:54.717382  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:54.731905  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:54.731923  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:54.788630  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:54.781289   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.782033   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.783636   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.784192   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:54.785831   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:54.788640  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:54.788650  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.353718  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:53:57.365518  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:53:57.365570  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:53:57.391621  124886 cri.go:89] found id: ""
	I1008 14:53:57.391638  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.391646  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:53:57.391650  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:53:57.391704  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:53:57.419557  124886 cri.go:89] found id: ""
	I1008 14:53:57.419574  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.419582  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:53:57.419587  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:53:57.419643  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:53:57.447029  124886 cri.go:89] found id: ""
	I1008 14:53:57.447047  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.447059  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:53:57.447077  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:53:57.447126  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:53:57.473391  124886 cri.go:89] found id: ""
	I1008 14:53:57.473410  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.473418  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:53:57.473423  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:53:57.473494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:53:57.499437  124886 cri.go:89] found id: ""
	I1008 14:53:57.499472  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.499481  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:53:57.499486  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:53:57.499542  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:53:57.525753  124886 cri.go:89] found id: ""
	I1008 14:53:57.525770  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.525776  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:53:57.525782  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:53:57.525827  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:53:57.555506  124886 cri.go:89] found id: ""
	I1008 14:53:57.555523  124886 logs.go:282] 0 containers: []
	W1008 14:53:57.555529  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:53:57.555539  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:53:57.555553  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:53:57.623045  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:53:57.623068  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:53:57.637620  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:53:57.637638  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:53:57.695326  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:53:57.688185   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.688709   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690285   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.690694   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:53:57.692275   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:53:57.695339  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:53:57.695356  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:53:57.755685  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:53:57.755710  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:00.285648  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:00.296554  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:00.296603  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:00.322379  124886 cri.go:89] found id: ""
	I1008 14:54:00.322396  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.322405  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:00.322409  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:00.322474  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:00.349397  124886 cri.go:89] found id: ""
	I1008 14:54:00.349414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.349423  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:00.349429  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:00.349507  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:00.375588  124886 cri.go:89] found id: ""
	I1008 14:54:00.375602  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.375608  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:00.375613  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:00.375670  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:00.401398  124886 cri.go:89] found id: ""
	I1008 14:54:00.401414  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.401420  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:00.401426  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:00.401494  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:00.427652  124886 cri.go:89] found id: ""
	I1008 14:54:00.427668  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.427675  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:00.427680  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:00.427736  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:00.451896  124886 cri.go:89] found id: ""
	I1008 14:54:00.451911  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.451918  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:00.451923  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:00.451967  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:00.478107  124886 cri.go:89] found id: ""
	I1008 14:54:00.478122  124886 logs.go:282] 0 containers: []
	W1008 14:54:00.478128  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:00.478135  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:00.478145  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:00.547950  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:00.547974  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:00.561968  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:00.561986  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:00.618117  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:00.611087   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.611597   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613202   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.613594   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:00.615184   13993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:00.618131  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:00.618141  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:00.683464  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:00.683490  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.211808  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:03.222618  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:03.222667  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:03.248716  124886 cri.go:89] found id: ""
	I1008 14:54:03.248732  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.248738  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:03.248742  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:03.248784  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:03.275183  124886 cri.go:89] found id: ""
	I1008 14:54:03.275202  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.275209  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:03.275214  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:03.275262  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:03.301882  124886 cri.go:89] found id: ""
	I1008 14:54:03.301909  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.301915  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:03.301920  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:03.301966  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:03.328783  124886 cri.go:89] found id: ""
	I1008 14:54:03.328799  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.328811  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:03.328817  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:03.328864  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:03.355235  124886 cri.go:89] found id: ""
	I1008 14:54:03.355251  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.355259  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:03.355268  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:03.355313  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:03.382286  124886 cri.go:89] found id: ""
	I1008 14:54:03.382305  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.382313  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:03.382318  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:03.382371  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:03.408682  124886 cri.go:89] found id: ""
	I1008 14:54:03.408700  124886 logs.go:282] 0 containers: []
	W1008 14:54:03.408708  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:03.408718  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:03.408732  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:03.438177  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:03.438196  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:03.507859  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:03.507881  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:03.523723  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:03.523747  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:03.580407  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:03.573472   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.574038   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.575548   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.576072   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:03.577666   14130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:03.580418  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:03.580430  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.142863  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:06.153852  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 14:54:06.153912  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 14:54:06.180234  124886 cri.go:89] found id: ""
	I1008 14:54:06.180253  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.180264  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 14:54:06.180271  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 14:54:06.180320  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 14:54:06.207080  124886 cri.go:89] found id: ""
	I1008 14:54:06.207094  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.207101  124886 logs.go:284] No container was found matching "etcd"
	I1008 14:54:06.207106  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 14:54:06.207152  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 14:54:06.232369  124886 cri.go:89] found id: ""
	I1008 14:54:06.232384  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.232390  124886 logs.go:284] No container was found matching "coredns"
	I1008 14:54:06.232394  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 14:54:06.232438  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 14:54:06.257360  124886 cri.go:89] found id: ""
	I1008 14:54:06.257376  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.257383  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 14:54:06.257388  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 14:54:06.257433  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 14:54:06.284487  124886 cri.go:89] found id: ""
	I1008 14:54:06.284507  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.284516  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 14:54:06.284523  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 14:54:06.284584  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 14:54:06.310846  124886 cri.go:89] found id: ""
	I1008 14:54:06.310863  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.310874  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 14:54:06.310882  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 14:54:06.310935  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 14:54:06.337095  124886 cri.go:89] found id: ""
	I1008 14:54:06.337114  124886 logs.go:282] 0 containers: []
	W1008 14:54:06.337121  124886 logs.go:284] No container was found matching "kindnet"
	I1008 14:54:06.337130  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 14:54:06.337142  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 14:54:06.406561  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 14:54:06.406591  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 14:54:06.421066  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 14:54:06.421088  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 14:54:06.477926  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 14:54:06.469780   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471457   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.471898   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473452   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 14:54:06.473816   14239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 14:54:06.477943  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 14:54:06.477957  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 14:54:06.538516  124886 logs.go:123] Gathering logs for container status ...
	I1008 14:54:06.538537  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 14:54:09.071758  124886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:54:09.082621  124886 kubeadm.go:601] duration metric: took 4m3.01446136s to restartPrimaryControlPlane
	W1008 14:54:09.082718  124886 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 14:54:09.082774  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:54:09.534098  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:54:09.546894  124886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:54:09.555065  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:54:09.555116  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:54:09.563122  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:54:09.563134  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:54:09.563181  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:54:09.571418  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:54:09.571492  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:54:09.579061  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:54:09.587199  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:54:09.587244  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:54:09.594420  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.602223  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:54:09.602263  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:54:09.609598  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:54:09.616978  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:54:09.617035  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:54:09.624225  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:54:09.679083  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:54:09.736432  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 14:58:12.118648  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 14:58:12.118737  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 14:58:12.121564  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:58:12.121611  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:58:12.121691  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 14:58:12.121739  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 14:58:12.121768  124886 kubeadm.go:318] OS: Linux
	I1008 14:58:12.121805  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 14:58:12.121846  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 14:58:12.121885  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 14:58:12.121936  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 14:58:12.121975  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 14:58:12.122056  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 14:58:12.122130  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 14:58:12.122194  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 14:58:12.122280  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:58:12.122381  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:58:12.122523  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:58:12.122608  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:58:12.124721  124886 out.go:252]   - Generating certificates and keys ...
	I1008 14:58:12.124815  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:58:12.124880  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:58:12.124964  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 14:58:12.125031  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 14:58:12.125148  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 14:58:12.125193  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 14:58:12.125282  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 14:58:12.125362  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 14:58:12.125490  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 14:58:12.125594  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 14:58:12.125626  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 14:58:12.125673  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:58:12.125714  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:58:12.125760  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:58:12.125802  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:58:12.125857  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:58:12.125902  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:58:12.125971  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:58:12.126032  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:58:12.128152  124886 out.go:252]   - Booting up control plane ...
	I1008 14:58:12.128237  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:58:12.128300  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:58:12.128371  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:58:12.128508  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:58:12.128583  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:58:12.128689  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:58:12.128762  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:58:12.128794  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:58:12.128904  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:58:12.128993  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:58:12.129038  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0016053s
	I1008 14:58:12.129115  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:58:12.129187  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 14:58:12.129304  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:58:12.129408  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:58:12.129490  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	I1008 14:58:12.129546  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	I1008 14:58:12.129607  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	I1008 14:58:12.129609  124886 kubeadm.go:318] 
	I1008 14:58:12.129696  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 14:58:12.129765  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 14:58:12.129857  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 14:58:12.129935  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 14:58:12.129999  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 14:58:12.130073  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 14:58:12.130125  124886 kubeadm.go:318] 
	W1008 14:58:12.130230  124886 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.0016053s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651418s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000657435s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000893578s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 14:58:12.130328  124886 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 14:58:12.582965  124886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:58:12.596265  124886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 14:58:12.596310  124886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:58:12.604829  124886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:58:12.604840  124886 kubeadm.go:157] found existing configuration files:
	
	I1008 14:58:12.604880  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 14:58:12.613146  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:58:12.613253  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:58:12.621163  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 14:58:12.629390  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:58:12.629433  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:58:12.637274  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.645831  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:58:12.645886  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:58:12.653972  124886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 14:58:12.662348  124886 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:58:12.662392  124886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:58:12.670230  124886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 14:58:12.730328  124886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 14:58:12.789898  124886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:02:14.463875  124886 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:02:14.464082  124886 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:02:14.466966  124886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:02:14.467026  124886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:02:14.467112  124886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:02:14.467156  124886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:02:14.467184  124886 kubeadm.go:318] OS: Linux
	I1008 15:02:14.467232  124886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:02:14.467270  124886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:02:14.467309  124886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:02:14.467348  124886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:02:14.467386  124886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:02:14.467424  124886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:02:14.467494  124886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:02:14.467536  124886 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:02:14.467596  124886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:02:14.467693  124886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:02:14.467779  124886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:02:14.467827  124886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:02:14.470599  124886 out.go:252]   - Generating certificates and keys ...
	I1008 15:02:14.470674  124886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:02:14.470757  124886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:02:14.470867  124886 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:02:14.470954  124886 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:02:14.471017  124886 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:02:14.471091  124886 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:02:14.471148  124886 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:02:14.471198  124886 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:02:14.471289  124886 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:02:14.471353  124886 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:02:14.471382  124886 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:02:14.471424  124886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:02:14.471487  124886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:02:14.471529  124886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:02:14.471569  124886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:02:14.471615  124886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:02:14.471657  124886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:02:14.471734  124886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:02:14.471802  124886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:02:14.473075  124886 out.go:252]   - Booting up control plane ...
	I1008 15:02:14.473133  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:02:14.473209  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:02:14.473257  124886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:02:14.473356  124886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:02:14.473436  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:02:14.473538  124886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:02:14.473606  124886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:02:14.473637  124886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:02:14.473747  124886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:02:14.473833  124886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:02:14.473877  124886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.93866ms
	I1008 15:02:14.473950  124886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:02:14.474013  124886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1008 15:02:14.474094  124886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:02:14.474159  124886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:02:14.474228  124886 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	I1008 15:02:14.474292  124886 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	I1008 15:02:14.474371  124886 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	I1008 15:02:14.474380  124886 kubeadm.go:318] 
	I1008 15:02:14.474476  124886 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:02:14.474542  124886 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:02:14.474617  124886 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:02:14.474713  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:02:14.474773  124886 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:02:14.474854  124886 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:02:14.474900  124886 kubeadm.go:318] 
	I1008 15:02:14.474937  124886 kubeadm.go:402] duration metric: took 12m8.444330692s to StartCluster
	I1008 15:02:14.474986  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:02:14.475048  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:02:14.503050  124886 cri.go:89] found id: ""
	I1008 15:02:14.503067  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.503076  124886 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:02:14.503082  124886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:02:14.503136  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:02:14.530120  124886 cri.go:89] found id: ""
	I1008 15:02:14.530138  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.530145  124886 logs.go:284] No container was found matching "etcd"
	I1008 15:02:14.530149  124886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:02:14.530200  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:02:14.555892  124886 cri.go:89] found id: ""
	I1008 15:02:14.555909  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.555916  124886 logs.go:284] No container was found matching "coredns"
	I1008 15:02:14.555921  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:02:14.555972  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:02:14.583336  124886 cri.go:89] found id: ""
	I1008 15:02:14.583351  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.583358  124886 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:02:14.583363  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:02:14.583409  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:02:14.611139  124886 cri.go:89] found id: ""
	I1008 15:02:14.611160  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.611169  124886 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:02:14.611175  124886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:02:14.611227  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:02:14.639405  124886 cri.go:89] found id: ""
	I1008 15:02:14.639422  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.639429  124886 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:02:14.639434  124886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:02:14.639496  124886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:02:14.666049  124886 cri.go:89] found id: ""
	I1008 15:02:14.666066  124886 logs.go:282] 0 containers: []
	W1008 15:02:14.666073  124886 logs.go:284] No container was found matching "kindnet"
	I1008 15:02:14.666082  124886 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:02:14.666093  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:02:14.729847  124886 logs.go:123] Gathering logs for container status ...
	I1008 15:02:14.729877  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 15:02:14.760743  124886 logs.go:123] Gathering logs for kubelet ...
	I1008 15:02:14.760761  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:02:14.827532  124886 logs.go:123] Gathering logs for dmesg ...
	I1008 15:02:14.827555  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:02:14.842256  124886 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:02:14.842273  124886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:02:14.900360  124886 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:02:14.893119   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.893629   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895213   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.895681   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:14.897261   15598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1008 15:02:14.900380  124886 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:02:14.900418  124886 out.go:285] * 
	W1008 15:02:14.900560  124886 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.900582  124886 out.go:285] * 
	W1008 15:02:14.902936  124886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:02:14.906609  124886 out.go:203] 
	W1008 15:02:14.908139  124886 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.93866ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000136115s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000235916s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000345155s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:02:14.908172  124886 out.go:285] * 
	I1008 15:02:14.910356  124886 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.237047841Z" level=info msg="createCtr: removing container 2613a0d4a3380b900751d682e8322989397f568ab578ee7ffa4f599a27aa571c" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.23711764Z" level=info msg="createCtr: deleting container 2613a0d4a3380b900751d682e8322989397f568ab578ee7ffa4f599a27aa571c from storage" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:18 functional-367186 crio[5841]: time="2025-10-08T15:02:18.240485039Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-367186_kube-system_79ef8396c9b4453448760b569bb6e391_0" id=a8af71c0-37ec-4a65-9a3d-7c7196db04f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.213272544Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=00d470f5-1305-4765-8607-5193dfc5fa4f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.214483028Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=03fbf2f6-f103-4a98-ace2-6261964e548e name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.215780994Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-367186/kube-apiserver" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.216282948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.221927127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.222633774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.240918002Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.243542117Z" level=info msg="createCtr: deleting container ID 3eb8462cdd91fe01eba72ae1fe6861b427a22e562fb01825c0ccaeda01051ae3 from idIndex" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.243602653Z" level=info msg="createCtr: removing container 3eb8462cdd91fe01eba72ae1fe6861b427a22e562fb01825c0ccaeda01051ae3" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.243662132Z" level=info msg="createCtr: deleting container 3eb8462cdd91fe01eba72ae1fe6861b427a22e562fb01825c0ccaeda01051ae3 from storage" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:24 functional-367186 crio[5841]: time="2025-10-08T15:02:24.248511632Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-367186_kube-system_c9f63674abedb97e40dbf72720752d59_0" id=c3acec42-19b7-408a-a0fb-f7385aa34bea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.212357781Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=472df4ac-1417-4791-a125-de54b089b79a name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.213791908Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=7cf0b156-32b4-4e30-9d98-33a0dccc0d15 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.215071389Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-367186/kube-controller-manager" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.215459679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.220479256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.221036159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.246101333Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.248263983Z" level=info msg="createCtr: deleting container ID 26380e8011e0657f9d7cc7c77c4a194d5c4e431b8ef2f9e48a0743ad3d9ba078 from idIndex" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.248320103Z" level=info msg="createCtr: removing container 26380e8011e0657f9d7cc7c77c4a194d5c4e431b8ef2f9e48a0743ad3d9ba078" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.248367294Z" level=info msg="createCtr: deleting container 26380e8011e0657f9d7cc7c77c4a194d5c4e431b8ef2f9e48a0743ad3d9ba078 from storage" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:02:25 functional-367186 crio[5841]: time="2025-10-08T15:02:25.251200926Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-367186_kube-system_db7c090bc97841dbfb9e61dc449790ad_0" id=0e4a649d-27eb-4d6d-9c08-a427ab00422f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:02:26.128963   16995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:26.129512   16995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:26.131216   16995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:26.131793   16995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1008 15:02:26.133477   16995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.415778] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:02:26 up  2:44,  0 user,  load average: 1.03, 0.25, 0.29
	Linux functional-367186 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:02:18 functional-367186 kubelet[14967]:         container etcd start failed in pod etcd-functional-367186_kube-system(79ef8396c9b4453448760b569bb6e391): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:18 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:18 functional-367186 kubelet[14967]: E1008 15:02:18.241004   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-367186" podUID="79ef8396c9b4453448760b569bb6e391"
	Oct 08 15:02:23 functional-367186 kubelet[14967]: E1008 15:02:23.277935   14967 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.212566   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.234471   14967 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-367186\" not found"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.248937   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:24 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:24 functional-367186 kubelet[14967]:  > podSandboxID="103af37cbf4c9221b295ec70e9d3c9c67c8cbc7d0f6d428cb18ada4b23a2bd33"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.249074   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:24 functional-367186 kubelet[14967]:         container kube-apiserver start failed in pod kube-apiserver-functional-367186_kube-system(c9f63674abedb97e40dbf72720752d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:24 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.249120   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-367186" podUID="c9f63674abedb97e40dbf72720752d59"
	Oct 08 15:02:24 functional-367186 kubelet[14967]: E1008 15:02:24.837341   14967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-367186?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: I1008 15:02:25.003178   14967 kubelet_node_status.go:75] "Attempting to register node" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.003818   14967 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.211699   14967 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-367186\" not found" node="functional-367186"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.251886   14967 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > podSandboxID="49d755d590c1e6c75fffb26df4018ef3af1ece9b6aef63dbe754f59f467146f3"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252026   14967 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:02:25 functional-367186 kubelet[14967]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-367186_kube-system(db7c090bc97841dbfb9e61dc449790ad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:02:25 functional-367186 kubelet[14967]:  > logger="UnhandledError"
	Oct 08 15:02:25 functional-367186 kubelet[14967]: E1008 15:02:25.252072   14967 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-367186" podUID="db7c090bc97841dbfb9e61dc449790ad"
	Oct 08 15:02:26 functional-367186 kubelet[14967]: E1008 15:02:26.046948   14967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-367186.186c8c01e7d9a073  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-367186,UID:functional-367186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-367186 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-367186,},FirstTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,LastTimestamp:2025-10-08 14:58:14.207676531 +0000 UTC m=+0.248885077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-367186,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-367186 -n functional-367186: exit status 2 (349.225083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-367186" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (2.41s)
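Editor's note: the kubelet entries above show the underlying failure for this group of tests: every control-plane container (etcd, kube-apiserver, kube-controller-manager) dies at creation with "container create failed: cannot open sd-bus: No such file or directory", so the apiserver never listens on 8441 and the connection-refused errors in this and the following tests are downstream of that. A manual check of the sd-bus precondition inside the node could look like the sketch below; this is only a sketch, the profile name is taken from this report, and the socket paths and /etc/crio location are the usual systemd and CRI-O defaults rather than anything confirmed by the log itself.

	out/minikube-linux-amd64 -p functional-367186 ssh "ls -l /run/dbus/system_bus_socket /run/systemd/private"
	out/minikube-linux-amd64 -p functional-367186 ssh "grep -r cgroup_manager /etc/crio"

Missing systemd sockets combined with a systemd cgroup manager in the CRI-O config would be a plausible explanation for the error logged here.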

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-367186 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-367186 create deployment hello-node --image kicbase/echo-server: exit status 1 (58.767119ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-367186 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 service list: exit status 103 (314.575006ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-367186 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-367186 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-367186 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-367186\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 service list -o json: exit status 103 (305.752468ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-367186 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-367186 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 service --namespace=default --https --url hello-node: exit status 103 (322.171007ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-367186 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-367186 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 service hello-node --url --format={{.IP}}: exit status 103 (319.370165ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-367186 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-367186 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-367186 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-367186\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 service hello-node --url: exit status 103 (319.691433ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-367186 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-367186 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-367186 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-367186"
functional_test.go:1579: failed to parse "* The control-plane node functional-367186 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-367186\"": parse "* The control-plane node functional-367186 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-367186\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdany-port2779261458/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759935742667385576" to /tmp/TestFunctionalparallelMountCmdany-port2779261458/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759935742667385576" to /tmp/TestFunctionalparallelMountCmdany-port2779261458/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759935742667385576" to /tmp/TestFunctionalparallelMountCmdany-port2779261458/001/test-1759935742667385576
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.785126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 15:02:23.046636   98900 retry.go:31] will retry after 307.360497ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 15:02 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 15:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 15:02 test-1759935742667385576
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh cat /mount-9p/test-1759935742667385576
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-367186 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-367186 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (61.448839ms)

                                                
                                                
** stderr ** 
	E1008 15:02:24.402148  142461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-367186 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (332.330814ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=33685)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  8 15:02 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  8 15:02 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  8 15:02 test-1759935742667385576
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-367186 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdany-port2779261458/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdany-port2779261458/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2779261458/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:33685
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2779261458/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdany-port2779261458/001:/mount-9p --alsologtostderr -v=1] stderr:
I1008 15:02:22.735737  141226 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:22.736078  141226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:22.736086  141226 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:22.736092  141226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:22.736408  141226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:22.736761  141226 mustload.go:65] Loading cluster: functional-367186
I1008 15:02:22.737265  141226 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:22.738055  141226 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:22.761181  141226 host.go:66] Checking if "functional-367186" exists ...
I1008 15:02:22.761543  141226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 15:02:22.858130  141226 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:22.843072229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 15:02:22.858548  141226 cli_runner.go:164] Run: docker network inspect functional-367186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 15:02:22.893515  141226 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2779261458/001 into VM as /mount-9p ...
I1008 15:02:22.897858  141226 out.go:179]   - Mount type:   9p
I1008 15:02:22.902589  141226 out.go:179]   - User ID:      docker
I1008 15:02:22.904024  141226 out.go:179]   - Group ID:     docker
I1008 15:02:22.905288  141226 out.go:179]   - Version:      9p2000.L
I1008 15:02:22.909774  141226 out.go:179]   - Message Size: 262144
I1008 15:02:22.911219  141226 out.go:179]   - Options:      map[]
I1008 15:02:22.912761  141226 out.go:179]   - Bind Address: 192.168.49.1:33685
I1008 15:02:22.914322  141226 out.go:179] * Userspace file server: 
I1008 15:02:22.914680  141226 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1008 15:02:22.914791  141226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:22.938304  141226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:23.054568  141226 mount.go:180] unmount for /mount-9p ran successfully
I1008 15:02:23.054620  141226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1008 15:02:23.066064  141226 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=33685,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1008 15:02:23.119636  141226 main.go:125] stdlog: ufs.go:141 connected
I1008 15:02:23.119842  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tversion tag 65535 msize 262144 version '9P2000.L'
I1008 15:02:23.119916  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rversion tag 65535 msize 262144 version '9P2000'
I1008 15:02:23.120272  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1008 15:02:23.120360  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rattach tag 0 aqid (20fa2a5 c45842cb 'd')
I1008 15:02:23.120716  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 0
I1008 15:02:23.120871  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2a5 c45842cb 'd') m d775 at 0 mt 1759935742 l 4096 t 0 d 0 ext )
I1008 15:02:23.125139  141226 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/.mount-process: {Name:mk36573cf02de7faac901c51448c35423803235d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 15:02:23.125403  141226 mount.go:105] mount successful: ""
I1008 15:02:23.127687  141226 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2779261458/001 to /mount-9p
I1008 15:02:23.130181  141226 out.go:203] 
I1008 15:02:23.131955  141226 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1008 15:02:23.989950  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 0
I1008 15:02:23.990109  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2a5 c45842cb 'd') m d775 at 0 mt 1759935742 l 4096 t 0 d 0 ext )
I1008 15:02:23.990470  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 1 
I1008 15:02:23.990527  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 
I1008 15:02:23.990678  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Topen tag 0 fid 1 mode 0
I1008 15:02:23.990753  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Ropen tag 0 qid (20fa2a5 c45842cb 'd') iounit 0
I1008 15:02:23.990925  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 0
I1008 15:02:23.991015  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2a5 c45842cb 'd') m d775 at 0 mt 1759935742 l 4096 t 0 d 0 ext )
I1008 15:02:23.991280  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 0 count 262120
I1008 15:02:23.991570  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 258
I1008 15:02:23.991741  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 261862
I1008 15:02:23.991780  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:23.991961  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 262120
I1008 15:02:23.992045  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:23.992212  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1008 15:02:23.992263  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2a9 c45842cb '') 
I1008 15:02:23.992398  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.992503  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2a9 c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.992665  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.992798  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2a9 c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.992966  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:23.993029  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:23.993211  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'test-1759935742667385576' 
I1008 15:02:23.993269  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2aa c45842cb '') 
I1008 15:02:23.993387  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.993606  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.993746  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.993831  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.994135  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:23.994165  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:23.994320  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1008 15:02:23.994364  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2a8 c45842ca '') 
I1008 15:02:23.994497  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.994586  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2a8 c45842ca '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.994707  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:23.994782  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2a8 c45842ca '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:23.994917  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:23.994953  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:23.995128  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 262120
I1008 15:02:23.995169  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:23.995413  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 1
I1008 15:02:23.995470  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.329661  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 1 0:'test-1759935742667385576' 
I1008 15:02:24.329734  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2aa c45842cb '') 
I1008 15:02:24.329969  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 1
I1008 15:02:24.330098  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.330299  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 1 newfid 2 
I1008 15:02:24.330336  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 
I1008 15:02:24.330482  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Topen tag 0 fid 2 mode 0
I1008 15:02:24.331191  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Ropen tag 0 qid (20fa2aa c45842cb '') iounit 0
I1008 15:02:24.331624  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 1
I1008 15:02:24.331757  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.332268  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 2 offset 0 count 24
I1008 15:02:24.332353  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 24
I1008 15:02:24.332665  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:24.332737  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.332919  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 1
I1008 15:02:24.332951  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.727000  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 0
I1008 15:02:24.727175  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2a5 c45842cb 'd') m d775 at 0 mt 1759935742 l 4096 t 0 d 0 ext )
I1008 15:02:24.727578  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 1 
I1008 15:02:24.727648  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 
I1008 15:02:24.727803  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Topen tag 0 fid 1 mode 0
I1008 15:02:24.727881  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Ropen tag 0 qid (20fa2a5 c45842cb 'd') iounit 0
I1008 15:02:24.728061  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 0
I1008 15:02:24.728193  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2a5 c45842cb 'd') m d775 at 0 mt 1759935742 l 4096 t 0 d 0 ext )
I1008 15:02:24.728455  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 0 count 262120
I1008 15:02:24.728703  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 258
I1008 15:02:24.729224  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 261862
I1008 15:02:24.729267  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:24.729439  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 262120
I1008 15:02:24.729512  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:24.729692  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1008 15:02:24.729762  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2a9 c45842cb '') 
I1008 15:02:24.729923  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.730032  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2a9 c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.730196  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.730301  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2a9 c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.730435  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:24.730483  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.730646  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'test-1759935742667385576' 
I1008 15:02:24.730698  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2aa c45842cb '') 
I1008 15:02:24.730891  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.731027  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.731185  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.731255  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('test-1759935742667385576' 'jenkins' 'balintp' '' q (20fa2aa c45842cb '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.731387  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:24.731422  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.731588  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1008 15:02:24.731627  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rwalk tag 0 (20fa2a8 c45842ca '') 
I1008 15:02:24.731745  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.731834  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2a8 c45842ca '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.731969  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tstat tag 0 fid 2
I1008 15:02:24.732058  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2a8 c45842ca '') m 644 at 0 mt 1759935742 l 24 t 0 d 0 ext )
I1008 15:02:24.732186  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 2
I1008 15:02:24.732212  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.732391  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tread tag 0 fid 1 offset 258 count 262120
I1008 15:02:24.732423  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rread tag 0 count 0
I1008 15:02:24.732619  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 1
I1008 15:02:24.732670  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:24.733844  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1008 15:02:24.733901  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rerror tag 0 ename 'file not found' ecode 0
I1008 15:02:25.036546  141226 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:50832 Tclunk tag 0 fid 0
I1008 15:02:25.036598  141226 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:50832 Rclunk tag 0
I1008 15:02:25.037141  141226 main.go:125] stdlog: ufs.go:147 disconnected
I1008 15:02:25.057035  141226 out.go:179] * Unmounting /mount-9p ...
I1008 15:02:25.058378  141226 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1008 15:02:25.067790  141226 mount.go:180] unmount for /mount-9p ran successfully
I1008 15:02:25.067917  141226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/.mount-process: {Name:mk36573cf02de7faac901c51448c35423803235d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 15:02:25.069400  141226 out.go:203] 
W1008 15:02:25.071051  141226 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1008 15:02:25.072652  141226 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.49s)
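Editor's note: the 9p mount itself worked in this run: the share is visible at /mount-9p with the three test files, and the failure comes from the busybox-mount-test pod, which cannot be created while the apiserver is down. The host-to-guest mount can be exercised on its own with the same commands the test drives; the sketch below reuses the profile name and mount point from the log above, and /tmp/mount-demo is an arbitrary host directory, not one used in this run.

	mkdir -p /tmp/mount-demo
	out/minikube-linux-amd64 mount -p functional-367186 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-367186 ssh "ls -la /mount-9p"
	out/minikube-linux-amd64 -p functional-367186 ssh "sudo umount -f /mount-9p"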

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-367186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-367186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1008 15:02:27.088568  144586 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:27.088842  144586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:27.088854  144586 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:27.088858  144586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:27.089112  144586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:27.089596  144586 mustload.go:65] Loading cluster: functional-367186
I1008 15:02:27.090048  144586 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:27.090531  144586 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:27.112732  144586 host.go:66] Checking if "functional-367186" exists ...
I1008 15:02:27.113059  144586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 15:02:27.224584  144586 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-08 15:02:27.211875732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 15:02:27.225491  144586 api_server.go:166] Checking apiserver status ...
I1008 15:02:27.225545  144586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1008 15:02:27.225622  144586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:27.254178  144586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
W1008 15:02:27.399708  144586 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1008 15:02:27.401369  144586 out.go:179] * The control-plane node functional-367186 apiserver is not running: (state=Stopped)
I1008 15:02:27.402949  144586 out.go:179]   To start a cluster, run: "minikube start -p functional-367186"

                                                
                                                
stdout: * The control-plane node functional-367186 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-367186"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 144587: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)
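Editor's note: the tunnel exits for the same reason as the other parallel tests: its apiserver probe, "sudo pgrep -xnf kube-apiserver.*minikube.*" run over SSH (visible in the stderr above), finds no apiserver process, so the command prints the state=Stopped hint instead of opening a tunnel. The probe can be replayed by hand as a quick check; a sketch only, with the profile name taken from this report:

	out/minikube-linux-amd64 -p functional-367186 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"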

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-367186 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-367186 apply -f testdata/testsvc.yaml: exit status 1 (69.788403ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-367186 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)
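Note: the kubectl apply above fails before validation even starts because the apiserver endpoint it needs for the openapi download, 192.168.49.2:8441, refuses TCP connections. A minimal Go sketch of that pre-check (the address and timeout come from the log and are used purely for illustration; this program is not part of the test suite):

// apicheck.go - illustrative only: confirm the apiserver endpoint from the
// error above even accepts TCP connections before running kubectl apply.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // endpoint reported in the kubectl stderr
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// matches the "connection refused" seen above
		fmt.Printf("apiserver %s not reachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver %s accepts TCP connections\n", addr)
}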

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (71.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1008 15:02:27.590007   98900 retry.go:31] will retry after 1.874006463s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-367186 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-367186 get svc nginx-svc: exit status 1 (53.421228ms)

** stderr ** 
	E1008 15:03:39.011359  149644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:03:39.011854  149644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:03:39.013364  149644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:03:39.013787  149644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1008 15:03:39.015248  149644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-367186 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (71.43s)
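Note: the retry.go lines above show the polling loop spending the full 71 seconds on a request that can never succeed, because the tunnel never published a host and every attempt hits the literal URL "http:". A minimal sketch, assuming a plain HTTP probe with doubling backoff (this is not minikube's retry.go), of how such a loop can bail out early on a host-less URL:

// probe.go - illustrative only: fail fast when the URL has no host, since
// retrying "http:" cannot succeed; otherwise retry with doubling backoff.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

func probe(rawURL string, attempts int, backoff time.Duration) error {
	u, err := url.Parse(rawURL)
	if err != nil || u.Host == "" {
		return fmt.Errorf("no host in request URL %q; tunnel has not published a service IP yet", rawURL)
	}
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(rawURL)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("service at %s never became reachable", rawURL)
}

func main() {
	if err := probe("http://", 3, time.Second); err != nil {
		fmt.Println(err) // mirrors the "no Host in request URL" condition above
	}
}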

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-367186
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image load --daemon kicbase/echo-server:functional-367186 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-367186" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)
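Note: the check that fails here is a load-then-list round trip: tag an image for the profile, run `image load --daemon`, then confirm the tag shows up in `image ls`. A rough sketch of that round trip using os/exec (binary path, profile name and tag are taken from the log for illustration; this is not the test's actual helper):

// imagecheck.go - illustrative only: reproduce the load-then-list check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-367186"
	tag := "kicbase/echo-server:" + profile

	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "load", "--daemon", tag).CombinedOutput(); err != nil {
		fmt.Printf("image load failed: %v\n%s", err, out)
		return
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Printf("image ls failed: %v\n", err)
		return
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("image present in the cluster runtime")
	} else {
		fmt.Println("image missing after load - the condition functional_test.go:461 reports")
	}
}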

x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1008 15:02:29.464378   98900 retry.go:31] will retry after 3.066934392s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1008 15:02:30.224106  146473 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:30.224252  146473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:30.224263  146473 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:30.224267  146473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:30.224511  146473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:30.225122  146473 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:30.225216  146473 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:30.225595  146473 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
	I1008 15:02:30.244100  146473 ssh_runner.go:195] Run: systemctl --version
	I1008 15:02:30.244144  146473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
	I1008 15:02:30.262095  146473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
	I1008 15:02:30.367636  146473 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1008 15:02:30.367726  146473 cache_images.go:254] Failed to load cached images for "functional-367186": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1008 15:02:30.367757  146473 cache_images.go:266] failed pushing to: functional-367186

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
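Note: this failure is a follow-on from ImageSaveToFile above: the tarball was never written, so `image load` stats a missing file. A small sketch, assuming the archive path from the log, of a pre-check that surfaces the real cause before attempting the load:

// savecheck.go - illustrative only: verify the saved archive exists and is
// non-empty before trying to load it, instead of failing inside the load.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
	info, err := os.Stat(path)
	if err != nil {
		fmt.Printf("image archive missing, skipping load: %v\n", err)
		return
	}
	if info.Size() == 0 {
		fmt.Println("image archive is empty; the earlier `image save` likely failed")
		return
	}
	fmt.Printf("archive is %d bytes; safe to run `image load`\n", info.Size())
}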

x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-367186
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image save --daemon kicbase/echo-server:functional-367186 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-367186
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-367186: exit status 1 (19.985251ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-367186

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-367186

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

x
+
TestMultiControlPlane/serial/StartCluster (502.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1008 15:07:26.912490   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:26.918988   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:26.930585   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:26.952026   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:26.993555   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:27.075090   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:27.236726   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:27.558536   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:28.200656   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:29.482195   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:32.045392   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:37.167135   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:07:47.408929   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:08:07.891251   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:08:48.854342   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:10:10.778638   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:12:26.911664   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:12:54.627185   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m20.643452905s)

-- stdout --
	* [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
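Taken together, the sed edits above set the pause image, the systemd cgroup manager, the conmon cgroup and the unprivileged-port sysctl before CRI-O is restarted, and the crictl.yaml written a few lines earlier points crictl at the CRI-O socket. Roughly, the relevant part of /etc/crio/crio.conf.d/02-crio.conf should now read (a sketch; other keys in the drop-in are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

Once the systemctl restart crio above has succeeded, a quick `sudo crictl info` is one way to confirm the endpoint at unix:///var/run/crio/crio.sock is reachable (the harness itself only stats the socket and runs crictl version, as seen below).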
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
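In the unit text above, the empty ExecStart= line is the standard systemd drop-in idiom for clearing any ExecStart inherited from the packaged kubelet.service before the minikube-specific command line is set. Once the drop-in is installed (the 10-kubeadm.conf and kubelet.service scp calls further down), the merged unit could be inspected on the node with:

	systemctl cat kubelet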
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
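This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to /var/tmp/minikube/kubeadm.yaml just before init. Outside the harness, kubeadm itself can sanity-check such a file; a sketch, assuming the minikube-staged binary path:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml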
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
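Because the lsmod check above found no ip_vs modules, kube-vip is generated with ARP-based leader election only and control-plane load-balancing stays off; the VIP 192.168.49.254 is still announced on eth0. On a host where IPVS load-balancing is wanted, the modules could be checked and loaded manually (whether they exist at all depends on the kernel; this GCP kernel apparently lacks them):

	lsmod | grep ip_vs
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh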
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
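The apiserver serving certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.49.2 and the HA VIP 192.168.49.254. After the scp calls below copy it onto the node, the SANs can be double-checked with openssl:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'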
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
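The pattern in this block is the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, with the hash taken from openssl x509 -hash. For instance, b5213941 above is the subject hash of minikubeCA.pem and can be reproduced with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem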
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
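All three control-plane health checks timed out after 4m0s, so kubeadm init fails in the wait-control-plane phase. Beyond the crictl commands kubeadm prints above, the usual next steps on the node would be to check the kubelet journal and the static pod manifests it was asked to run, for example:

	sudo journalctl -u kubelet --no-pager -n 100
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	ls -l /etc/kubernetes/manifests/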
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
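kubeadm reset --force tears the failed control plane back down (static pod manifests, etcd data, and the kubeconfig/cert state under /etc/kubernetes) so that init can be retried below; it deliberately does not clean CNI configuration or iptables rules. In this run minikube simply retries init, but a fully manual cleanup outside the retry loop would also flush those rules, roughly:

	sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X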
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	* 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	* 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	* 
	I1008 15:14:51.183310  151549 out.go:203] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
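The docker inspect output above shows the host ports Docker published for the "ha-430216" KIC container (SSH on 22/tcp mapped to 127.0.0.1:32783, the API server on 8443/tcp mapped to 32786). As a minimal sketch, and assuming the container is still running under the same profile name, the same value can be read back with the Go template that minikube itself uses later in this log:

    # Reads the published host port for 22/tcp from the container's NetworkSettings.Ports map.
    # Container name "ha-430216" and the expected value 32783 are taken from the inspect output above.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-430216
    # expected output: 32783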
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (305.511583ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:14:51.546131  156755 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
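The exit status 6 here corresponds to the kubeconfig warning in the stdout block above: the "ha-430216" endpoint is missing from the kubeconfig under the integration workspace, so `status` reports a stale kubectl context. A minimal sketch of the fix the warning itself points to (the profile flag usage mirrors other commands in this report; kubectl being on PATH is assumed):

    # Regenerate the kubeconfig entry for the ha-430216 profile, as suggested by the warning above.
    out/minikube-linux-amd64 update-context -p ha-430216
    # Confirm the context now resolves.
    kubectl config current-context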
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-367186 ssh sudo cat /etc/test/nested/copy/98900/hosts                                                                                                │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image save kicbase/echo-server:functional-367186 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image rm kicbase/echo-server:functional-367186 --alsologtostderr                                                                              │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image save --daemon kicbase/echo-server:functional-367186 --alsologtostderr                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ start          │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start          │ -p functional-367186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ start          │ -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ addons         │ functional-367186 addons list                                                                                                                                   │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ addons         │ functional-367186 addons list -o json                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ dashboard      │ --url --port 36195 -p functional-367186 --alsologtostderr -v=1                                                                                                  │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ update-context │ functional-367186 update-context --alsologtostderr -v=2                                                                                                         │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format short --alsologtostderr                                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh            │ functional-367186 ssh pgrep buildkitd                                                                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image          │ functional-367186 image ls --format yaml --alsologtostderr                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr                                                          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format json --alsologtostderr                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls --format table --alsologtostderr                                                                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image          │ functional-367186 image ls                                                                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete         │ -p functional-367186                                                                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start          │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                                 │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
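The generated kubeadm configuration above is staged to /var/tmp/minikube/kubeadm.yaml.new and copied into place before init runs. As a hedged sketch (not a step this run performs), a file like this could be sanity-checked with kubeadm's own validator from inside the node, assuming the docker driver named the node container after the ha-430216 profile:

	docker exec ha-430216 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
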
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
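Since the ip_vs modules were not found, minikube gave up on IPVS control-plane load-balancing, but the VIP itself is still handled via ARP: with vip_arp=true and vip_interface=eth0 in the manifest above, whichever control-plane node holds the plndr-cp-lock lease adds 192.168.49.254 to eth0 and answers ARP for it. A minimal check from inside the node once the static pod is running (hypothetical here, since the control plane never came up) would be:

	docker exec ha-430216 ip addr show eth0 | grep 192.168.49.254
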
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
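The /etc/ssl/certs symlink names used above follow OpenSSL's subject-hash convention: each link is named <hash>.0, where <hash> is what openssl x509 -hash prints for the certificate, and the .0 suffix disambiguates collisions. Reproducing the b5213941.0 link for the minikube CA, for example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 on this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
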
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:14:41 ha-430216 crio[778]: time="2025-10-08T15:14:41.439774859Z" level=info msg="createCtr: removing container dde85f15779b0d3a2c51a9d70a897cbc12688958687e29c8f8aa51962fcd035a" id=a2cdf238-9854-4a02-a8b6-8ad9d7df3a1a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:41 ha-430216 crio[778]: time="2025-10-08T15:14:41.439814129Z" level=info msg="createCtr: deleting container dde85f15779b0d3a2c51a9d70a897cbc12688958687e29c8f8aa51962fcd035a from storage" id=a2cdf238-9854-4a02-a8b6-8ad9d7df3a1a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:41 ha-430216 crio[778]: time="2025-10-08T15:14:41.441989824Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=a2cdf238-9854-4a02-a8b6-8ad9d7df3a1a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.41366711Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17a1fb7b-2233-47fd-aa0b-470800baa9ae name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.41454021Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d41cb739-ff52-41d5-af9d-6aa503ad85f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.415502832Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.415726017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.419100746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.419550743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.434848455Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.43627114Z" level=info msg="createCtr: deleting container ID e2787040e4552720328adcb5fc43a1f8874b08b3c0692cb50ab0fdc444ab6cf0 from idIndex" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.4363121Z" level=info msg="createCtr: removing container e2787040e4552720328adcb5fc43a1f8874b08b3c0692cb50ab0fdc444ab6cf0" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.43634451Z" level=info msg="createCtr: deleting container e2787040e4552720328adcb5fc43a1f8874b08b3c0692cb50ab0fdc444ab6cf0 from storage" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:45 ha-430216 crio[778]: time="2025-10-08T15:14:45.438477397Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=eaa4fd9e-6bcf-48b1-ae3a-f3d54b76c9f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.413204725Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=df1b3b44-d664-4c96-b9e2-680f10b560ea name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.414185768Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=dd033303-ce61-4e67-81a8-dbd745105c48 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.41520133Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.415404244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.418685557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.41912045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.435917486Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.437340879Z" level=info msg="createCtr: deleting container ID ab3cbbbf7a266f8e599cbc8ac3844ef50ee3262c1755ff799c25665e71c777f0 from idIndex" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.437378538Z" level=info msg="createCtr: removing container ab3cbbbf7a266f8e599cbc8ac3844ef50ee3262c1755ff799c25665e71c777f0" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.437410799Z" level=info msg="createCtr: deleting container ab3cbbbf7a266f8e599cbc8ac3844ef50ee3262c1755ff799c25665e71c777f0 from storage" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:14:49 ha-430216 crio[778]: time="2025-10-08T15:14:49.439759995Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=38c6ee6b-8160-4eca-92d0-cd53a47a1593 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:52.146557    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:52.147063    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:52.148591    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:52.149113    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:52.150674    2745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:14:52 up  2:57,  0 user,  load average: 0.08, 0.08, 0.16
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:14:41 ha-430216 kubelet[1982]: E1008 15:14:41.442482    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:14:41 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:14:41 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:14:41 ha-430216 kubelet[1982]: E1008 15:14:41.442521    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:14:45 ha-430216 kubelet[1982]: E1008 15:14:45.413093    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:14:45 ha-430216 kubelet[1982]: E1008 15:14:45.438815    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:14:45 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:14:45 ha-430216 kubelet[1982]:  > podSandboxID="53772938dd72b0704ce7f5196ea9e84ad454215649feb01984fd33ff782177e3"
	Oct 08 15:14:45 ha-430216 kubelet[1982]: E1008 15:14:45.438931    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:14:45 ha-430216 kubelet[1982]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:14:45 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:14:45 ha-430216 kubelet[1982]: E1008 15:14:45.438961    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:14:47 ha-430216 kubelet[1982]: E1008 15:14:47.034080    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:14:47 ha-430216 kubelet[1982]: I1008 15:14:47.189364    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:14:47 ha-430216 kubelet[1982]: E1008 15:14:47.189815    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:14:47 ha-430216 kubelet[1982]: E1008 15:14:47.535551    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca18b2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-430216 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406115506 +0000 UTC m=+0.673145554,LastTimestamp:2025-10-08 15:10:50.406115506 +0000 UTC m=+0.673145554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:14:49 ha-430216 kubelet[1982]: E1008 15:14:49.412593    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:14:49 ha-430216 kubelet[1982]: E1008 15:14:49.440122    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:14:49 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:14:49 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:14:49 ha-430216 kubelet[1982]: E1008 15:14:49.440223    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:14:49 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:14:49 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:14:49 ha-430216 kubelet[1982]: E1008 15:14:49.440259    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:14:50 ha-430216 kubelet[1982]: E1008 15:14:50.426978    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (306.395514ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:14:52.544355  157086 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (502.01s)
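The wait-control-plane failure above traces back to the runtime rather than kubeadm itself: every control-plane container in the kubelet and CRI-O sections fails with "CreateContainerError: container create failed: cannot open sd-bus", so the static pods never start and the health checks on 8443, 10257 and 10259 time out. A minimal triage sketch, using only the crictl and journalctl invocations the log itself recommends (the CRI-O socket path is the one from this run; CONTAINERID is a placeholder to fill in from the listing):

	# list every Kubernetes container CRI-O attempted to create
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container (replace CONTAINERID with an ID from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# the sd-bus error is raised by the runtime, so check the CRI-O and kubelet units directly
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400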

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (96.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (95.251175ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-430216" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- rollout status deployment/busybox: exit status 1 (94.557716ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.497141ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:14:52.842597   98900 retry.go:31] will retry after 1.441336533s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.886468ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:14:54.379289   98900 retry.go:31] will retry after 1.877705963s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.833753ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:14:56.352394   98900 retry.go:31] will retry after 1.425667776s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.270188ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:14:57.873071   98900 retry.go:31] will retry after 3.367803134s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.425218ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:15:01.342209   98900 retry.go:31] will retry after 7.494429277s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.061772ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:15:08.933060   98900 retry.go:31] will retry after 7.789847255s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.357377ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:15:16.826564   98900 retry.go:31] will retry after 16.765884127s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.580907ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:15:33.689113   98900 retry.go:31] will retry after 19.203741035s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.6427ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 15:15:52.994521   98900 retry.go:31] will retry after 33.830357436s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.592964ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (92.840212ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (92.395663ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (94.092246ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (90.841209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
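Every kubectl passthrough in this test fails before reaching an API server: the wrapper reports that cluster "ha-430216" does not exist or has no server, which matches the earlier status error that "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig. A sketch of how one could confirm that locally, assuming the kubeconfig path from the stderr above and the `minikube update-context` remedy the status output itself suggests:

	# show which contexts and clusters the test's kubeconfig actually contains
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21681-94984/kubeconfig
	# rewrite the context for this profile once the apiserver is reachable again
	out/minikube-linux-amd64 -p ha-430216 update-context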
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
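For reference, the JSON above is the raw docker container inspect dump for the ha-430216 node container. A minimal sketch of narrowing that dump to single fields with --format (using the container and network names from the output above; the first template matches the one minikube itself runs later in these logs):

	docker container inspect ha-430216 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	docker container inspect ha-430216 --format '{{(index .NetworkSettings.Networks "ha-430216").IPAddress}}'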
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (302.755145ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1008 15:16:27.604422  158055 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
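The stderr above shows that the test's kubeconfig no longer contains an entry for ha-430216, which is what the stale-context warning in stdout points at. A minimal sketch of the repair that warning suggests, assuming the same profile name and kubeconfig path used in this run:

	KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig out/minikube-linux-amd64 update-context -p ha-430216
	kubectl config use-context ha-430216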
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format short --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ ssh     │ functional-367186 ssh pgrep buildkitd                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
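	Not part of the captured output: the block above is the kubeadm config minikube rendered for this node. Per the scp and cp steps recorded later in this log, it is written to /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init. Assuming the ha-430216 profile is still up, one way to inspect what kubeadm actually consumed (a sketch, not a step the test ran) is:
	  # profile name and path are taken from this log
	  minikube ssh -p ha-430216 -- sudo cat /var/tmp/minikube/kubeadm.yaml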
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
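	Not part of the captured output: the kube-vip line above skips control-plane load-balancing because "lsmod | grep ip_vs" returned nothing. A minimal manual check, assuming shell access to the host (with the docker driver the node shares the host kernel) and that the usual IPVS module names apply:
	  lsmod | grep ip_vs                                    # see whether IPVS modules are already loaded
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh    # attempt to load them on the host kernel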
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
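	Not part of the captured output: a concrete form of the troubleshooting sequence kubeadm suggests above, run from the CI host; the container/profile name ha-430216 is taken from this log, and CONTAINERID is a placeholder for whatever the ps listing returns.
	  docker exec -it ha-430216 bash
	  crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	  journalctl -u kubelet -n 200 --no-pager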
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439507639Z" level=info msg="createCtr: removing container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439544791Z" level=info msg="createCtr: deleting container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378 from storage" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.441950194Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.412927114Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c89a34e6-8f70-4aa2-b4c7-fbe815696d76 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.414896715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3dedfa25-289d-40ce-b86e-ff3c388716ca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.415797667Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.416057282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419275293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419779166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.437693556Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439076432Z" level=info msg="createCtr: deleting container ID eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from idIndex" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439114836Z" level=info msg="createCtr: removing container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439146235Z" level=info msg="createCtr: deleting container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from storage" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.441195501Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.413167379Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=66ad4ef4-2293-4726-bdaa-e82870344008 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414014396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94fdd9cc-fb5f-4d79-802c-8d7a00f80cb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414902679Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.415127356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.419267751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.420117579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.435493111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436880142Z" level=info msg="createCtr: deleting container ID 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from idIndex" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:28.182986    3080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:28.183582    3080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:28.185213    3080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:28.185728    3080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:28.187267    3080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:28 up  2:58,  0 user,  load average: 0.10, 0.07, 0.15
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:21 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:21 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:21 ha-430216 kubelet[1982]: E1008 15:16:21.442439    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.412369    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441507    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441611    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441643    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.154614    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.374352    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-430216&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.412725    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439324    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439456    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

                                                
                                                
-- /stdout --
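The kubeadm failure captured above says all three control-plane components never answered their health checks, and kubeadm's own hint is to list the Kubernetes containers with crictl. In this run the "container status" section is empty, so the containers are failing at create time rather than crashing afterwards, and the CRI-O journal is where the create error actually lands. A minimal sketch of that check, assuming the ha-430216 profile and the crio socket path quoted in the log above:

	# list Kubernetes containers on the node (empty in this run, since creation itself fails)
	minikube ssh -p ha-430216 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# the create error ("cannot open sd-bus") is recorded in the CRI-O journal
	minikube ssh -p ha-430216 -- sudo journalctl -u crio -n 100 --no-pager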
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (295.425432ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:28.564086  158381 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (96.02s)
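The proximate error in the CRI-O and kubelet excerpts above is "Container creation error: cannot open sd-bus: No such file or directory": every kube-apiserver, kube-controller-manager, kube-scheduler and etcd create is rejected before the container starts, which is exactly why the kubeadm control-plane checks time out. The sd-bus message means the runtime tried to talk to systemd over its bus and could not reach it, which usually points at the systemd cgroup manager being selected while no systemd bus socket is available inside the node. A hedged sketch of how one might confirm that from the host; the config directory and socket paths below are the usual CRI-O/systemd defaults, not values taken from this report:

	# which cgroup manager CRI-O is configured for (systemd vs cgroupfs)
	minikube ssh -p ha-430216 -- sudo grep -R cgroup_manager /etc/crio
	# whether a systemd bus socket is actually present inside the node
	minikube ssh -p ha-430216 -- ls -l /run/systemd/private /run/dbus/system_bus_socket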

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.772684ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-430216"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
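The "no server found for cluster \"ha-430216\"" error here and the later "\"ha-430216\" does not appear in .../kubeconfig" status errors are the same symptom: the profile never got a usable kubeconfig entry because the API server never came up. The report's own hint is `minikube update-context`; a minimal sketch of that repair, which only helps once the control plane is actually reachable:

	# rewrite the kubeconfig entry for this profile, then verify kubectl can see it
	minikube -p ha-430216 update-context
	kubectl config current-context
	kubectl --context ha-430216 get nodes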
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
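The docker inspect dump above is where the post-mortem reads the node container's state, published ports and network addresses from; when only a single field is needed, a Go-template filter avoids scrolling the full JSON. A small sketch, with the field paths taken from the inspect output shown here:

	# container state, the host port published for the API server (8443/tcp), and the node IP
	docker inspect -f '{{.State.Status}}' ha-430216
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-430216
	docker inspect -f '{{(index .NetworkSettings.Networks "ha-430216").IPAddress}}' ha-430216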
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (292.563951ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:28.969143  158528 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-367186 ssh pgrep buildkitd                                                                           │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │                     │
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
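Note that every retry of "get pods -o jsonpath='{.items[*].status.podIP}'" in the table above has an empty END TIME: none of the probes against the busybox deployment from ha-pod-dns-test.yaml recorded a completion. The probes are plain kubectl queries, so they can be replayed by hand once a ha-430216 context actually exists in the kubeconfig (it does not in this run); the commands below are illustrative only:

    kubectl --context ha-430216 rollout status deployment/busybox
    kubectl --context ha-430216 get pods -o jsonpath='{.items[*].status.podIP}'
    kubectl --context ha-430216 get pods -o jsonpath='{.items[*].metadata.name}'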
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
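At this point minikube has created a dedicated bridge network for the cluster (192.168.49.0/24, gateway 192.168.49.1) and reserved 192.168.49.2 for the node container. A quick way to double-check what was created, shown only as an illustration and not part of the run:

    docker network inspect ha-430216 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
    # expected, given the log above: 192.168.49.0/24 gw=192.168.49.1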
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
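The block above is minikube's standard cri-o preparation: point crictl at the crio socket, switch the pause image and cgroup manager with sed edits to /etc/crio/crio.conf.d/02-crio.conf, add the unprivileged-port sysctl, enable ip_forward, then restart crio. An illustrative follow-up check inside the node container (assumes the ha-430216 container is still up; not part of the run):

    docker exec ha-430216 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    docker exec ha-430216 crictl info    # prints the runtime's status information once crio is back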
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
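The kubeadm config above is a single multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. As an aside, a config of this shape can be sanity-checked before init; the commands below are illustrative only and assume the v1.34.1 kubeadm binary that the log later finds under /var/lib/minikube/binaries/v1.34.1 inside the node container:

    # Validate the generated multi-document config (path taken from the scp later in the log).
    docker exec ha-430216 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # For comparison, print the defaults kubeadm would otherwise fill in.
    docker exec ha-430216 /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults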
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
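Because the lsmod probe above found no IPVS modules, the generated manifest falls back to ARP mode: kube-vip leader-elects in kube-system (lease plndr-cp-lock) and announces the VIP 192.168.49.254 on eth0 instead of programming IPVS for control-plane load-balancing. Whether IPVS could be enabled is a property of the host kernel rather than of this run; a purely illustrative check on the host would be:

    # Try to load the standard IPVS modules; whether they exist depends on the kernel build,
    # and the log above says they appear not to be available here.
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh 2>/dev/null || echo 'IPVS modules unavailable'
    lsmod | grep '^ip_vs'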
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
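The two commands just above pin control-plane.minikube.internal to the HA VIP inside the node before the kubelet is started. Re-checking the entry on the node should be as simple as:
	$ minikube ssh -p ha-430216 -- grep control-plane.minikube.internal /etc/hosts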
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
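The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0), which is how the copied PEM files become trusted CAs on the node. The SANs of the freshly generated apiserver cert can be inspected the same way; a sketch, assuming the profile is still present:
	$ minikube ssh -p ha-430216 "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text"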
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
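The failure output above already carries kubeadm's own troubleshooting hint. On this crio-based kicbase node the equivalent checks, assuming the node survives the retry that follows, would be roughly:
	$ minikube ssh -p ha-430216 -- sudo crictl ps -a
	$ minikube ssh -p ha-430216 -- sudo journalctl -u crio -n 200 --no-pager
	$ minikube ssh -p ha-430216 -- sudo journalctl -u kubelet -n 200 --no-pager
None of the control-plane containers ever show up in the container listings gathered later in this log, which is consistent with all three health checks timing out.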
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
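The health endpoints kubeadm polls are plain HTTPS endpoints on the node, so they can also be probed by hand (self-signed certs, hence -k; this assumes curl is available in the kicbase image):
	$ minikube ssh -p ha-430216 -- curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	$ minikube ssh -p ha-430216 -- curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
	$ minikube ssh -p ha-430216 -- curl -sk https://192.168.49.2:8443/livez    # kube-apiserver
In this run the scheduler and controller-manager checks fail with connection refused and the apiserver check times out, matching the error above.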
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
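The same bundle of kubelet, dmesg, describe-nodes, CRI-O and container-status output being gathered here can be pulled from a failed profile in one shot with minikube's own log command, e.g.:
	$ minikube logs -p ha-430216 --file=ha-430216.log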
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439507639Z" level=info msg="createCtr: removing container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439544791Z" level=info msg="createCtr: deleting container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378 from storage" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.441950194Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.412927114Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c89a34e6-8f70-4aa2-b4c7-fbe815696d76 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.414896715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3dedfa25-289d-40ce-b86e-ff3c388716ca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.415797667Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.416057282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419275293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419779166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.437693556Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439076432Z" level=info msg="createCtr: deleting container ID eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from idIndex" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439114836Z" level=info msg="createCtr: removing container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439146235Z" level=info msg="createCtr: deleting container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from storage" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.441195501Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.413167379Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=66ad4ef4-2293-4726-bdaa-e82870344008 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414014396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94fdd9cc-fb5f-4d79-802c-8d7a00f80cb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414902679Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.415127356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.419267751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.420117579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.435493111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436880142Z" level=info msg="createCtr: deleting container ID 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from idIndex" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:29.558535    3237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:29.559183    3237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:29.560789    3237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:29.561408    3237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:29.562892    3237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:29 up  2:59,  0 user,  load average: 0.10, 0.07, 0.15
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:21 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:21 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:21 ha-430216 kubelet[1982]: E1008 15:16:21.442439    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.412369    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441507    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441611    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441643    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.154614    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.374352    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-430216&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.412725    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439324    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439456    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

                                                
                                                
-- /stdout --
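The CRI-O and kubelet sections above show every control-plane container (kube-apiserver, kube-controller-manager, etcd) failing at creation with "cannot open sd-bus: No such file or directory", which is why the kubeadm control-plane checks time out. A minimal diagnostic sketch that follows the crictl hint kubeadm prints, assuming the ha-430216 node is still up and that CRI-O's configuration lives under /etc/crio/ (both assumptions, not part of this run):

	# Shell into the node for this profile.
	out/minikube-linux-amd64 ssh -p ha-430216
	# List Kubernetes containers via the CRI-O socket, as suggested in the kubeadm output above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Recent CRI-O logs; the same command the harness runs when gathering logs.
	sudo journalctl -u crio -n 400
	# "cannot open sd-bus" typically means the systemd cgroup manager is configured but systemd's
	# bus is unreachable from the runtime; the config path here is an assumption.
	sudo grep -r cgroup_manager /etc/crio/
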
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (294.286407ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:29.935336  158856 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
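The status check above also warns that kubectl is pointing at a stale context and that the "ha-430216" endpoint is missing from the kubeconfig. A short sketch of how that could be inspected and repaired, assuming the cluster is eventually brought back up (standard kubectl/minikube commands; the profile name is taken from this report):

	# See which contexts exist and which one is current.
	kubectl config get-contexts
	kubectl config current-context
	# Rewrite the kubeconfig entry for this profile, as the warning itself suggests.
	out/minikube-linux-amd64 update-context -p ha-430216
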

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (1.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 node add --alsologtostderr -v 5: exit status 103 (254.20969ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-430216"

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:29.993739  158986 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:29.994012  158986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:29.994022  158986 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:29.994026  158986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:29.994200  158986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:29.994522  158986 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:29.994857  158986 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:29.995239  158986 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:30.012030  158986 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:30.012337  158986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:30.072167  158986 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:30.061827061 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:16:30.072267  158986 api_server.go:166] Checking apiserver status ...
	I1008 15:16:30.072312  158986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:16:30.072349  158986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:30.089924  158986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	W1008 15:16:30.196291  158986 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:30.198244  158986 out.go:179] * The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	I1008 15:16:30.199680  158986 out.go:179]   To start a cluster, run: "minikube start -p ha-430216"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-430216 node add --alsologtostderr -v 5" : exit status 103
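The trace above shows how `node add` concludes the apiserver is stopped: it runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and gets exit status 1 because no apiserver process exists. A sketch of reproducing that check by hand, assuming the node container is reachable at the 192.168.49.2:8443 endpoint kubeadm was probing earlier (an assumption about this environment):

	# Same process check the minikube binary performs over SSH.
	out/minikube-linux-amd64 ssh -p ha-430216 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Probe the apiserver health endpoint directly; -k because the cluster uses minikube's own CA.
	curl -k https://192.168.49.2:8443/livez
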
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
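Individual fields from an inspect dump like the one above can be pulled with the same Go-template mechanism the harness uses for the SSH port (the "22/tcp" lookup in the trace above). A sketch, assuming the container is still named ha-430216:

	# Host port mapped to the apiserver's 8443/tcp (32786 in the dump above).
	docker container inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-430216
	# Container IP on the ha-430216 network (192.168.49.2 in the dump above).
	docker container inspect -f '{{ (index .NetworkSettings.Networks "ha-430216").IPAddress }}' ha-430216
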
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (289.932676ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:30.499661  159093 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
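	The network setup above comes down to one labelled 'docker network create' call using the subnet and gateway minikube just picked. A minimal Go sketch of that same invocation (subnet, labels and network name copied from the log; the wrapper program itself is illustrative, not minikube code):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		name := "ha-430216" // network name from the log above
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("network create failed:", err)
		}
	}
	
	The gateway (192.168.49.1) then doubles as host.minikube.internal, and the first client address (192.168.49.2) becomes the node's static IP, as the following lines show.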
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
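	The chain of sed commands above rewrites only a few keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl, followed by a daemon-reload and a crio restart. A rough Go equivalent of the two central substitutions, using a made-up starting config purely for illustration:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// hypothetical starting contents of 02-crio.conf; the real file ships with the base image
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"cgroupfs\"\n" +
			"conmon_cgroup = \"system.slice\"\n"
	
		// same effect as the sed over pause_image logged above
		rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = rePause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	
		// same effect as the sed over cgroup_manager logged above
		reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = reCgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	
		fmt.Print(conf)
	}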
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
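	
	Only a handful of fields in the kubeadm config above are node-specific (advertise address, bind port, node name); everything else is fixed. A trimmed sketch of filling those fields with text/template; the template below is an illustration, not minikube's actual one:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// initCfg is a cut-down stand-in for the InitConfiguration above, not minikube's real template
	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.NodeIP}}\n" +
		"  bindPort: {{.Port}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.Name}}\"\n"
	
	func main() {
		data := struct {
			NodeIP string
			Port   int
			Name   string
		}{NodeIP: "192.168.49.2", Port: 8443, Name: "ha-430216"}
	
		t := template.Must(template.New("init").Parse(initCfg))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}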
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
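	The kube-vip manifest above is a static pod: once rendered it only needs to be written into the kubelet's staticPodPath (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, 1364 bytes). A minimal sketch of that final write, with the manifest body elided:
	
	package main
	
	import "os"
	
	func main() {
		// placeholder for the rendered Pod YAML shown above
		manifest := []byte("apiVersion: v1\nkind: Pod\n# ...rest of the kube-vip manifest...\n")
	
		// staticPodPath from the KubeletConfiguration above; kubelet starts the pod on its own
		if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o644); err != nil {
			panic(err)
		}
	}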
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
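	The apiserver profile certificate generated above is signed for the service VIP, localhost, the node IP and the HA VIP listed in the log. If a start later fails with certificate/SAN complaints, the SANs actually baked into that cert can be checked directly; a hedged sketch, assuming openssl is available on the host that holds the minikube profile directory:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'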
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
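	The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each certificate placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem in this run), which is the name TLS clients look up. A minimal sketch of how one such link is derived, using the paths from this log:

	    CERT=/etc/ssl/certs/minikubeCA.pem                # linked from /usr/share/ca-certificates earlier in the log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints the subject hash, b5213941 here
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # the .0 suffix disambiguates hash collisions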
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
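	The init attempt above timed out because none of the three control-plane health endpoints kubeadm polls ever answered, and the retry below fails the same way. Those probes can be replayed by hand from inside the node (for example via minikube ssh -p ha-430216) to see whether anything is listening at all; a hedged sketch using the exact URLs kubeadm prints, with -k only because this is an ad-hoc check against self-signed certs:

	    curl -sk https://127.0.0.1:10259/livez   ; echo    # kube-scheduler
	    curl -sk https://127.0.0.1:10257/healthz ; echo    # kube-controller-manager
	    curl -sk https://192.168.49.2:8443/livez ; echo    # kube-apiserver
	    # "connection refused" means the static pod never started; see the crictl hints quoted above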
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
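	With no kube-* containers found, minikube falls back to collecting kubelet, dmesg, CRI-O and container-status logs. The same triage can be run manually on the node when reproducing this failure, using only commands that already appear in this log (the container ID is whatever crictl reports):

	    sudo journalctl -u kubelet -n 400 --no-pager    # did the kubelet ever pick up the static pod manifests?
	    sudo journalctl -u crio -n 400 --no-pager       # did CRI-O fail while creating or starting the containers?
	    sudo crictl ps -a                               # were any kube-* containers created at all? (none were, above)
	    sudo crictl logs CONTAINERID                    # inspect a failing container, per the kubeadm hint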
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439507639Z" level=info msg="createCtr: removing container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439544791Z" level=info msg="createCtr: deleting container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378 from storage" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.441950194Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.412927114Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c89a34e6-8f70-4aa2-b4c7-fbe815696d76 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.414896715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3dedfa25-289d-40ce-b86e-ff3c388716ca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.415797667Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.416057282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419275293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419779166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.437693556Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439076432Z" level=info msg="createCtr: deleting container ID eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from idIndex" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439114836Z" level=info msg="createCtr: removing container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439146235Z" level=info msg="createCtr: deleting container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from storage" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.441195501Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.413167379Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=66ad4ef4-2293-4726-bdaa-e82870344008 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414014396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94fdd9cc-fb5f-4d79-802c-8d7a00f80cb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414902679Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.415127356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.419267751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.420117579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.435493111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436880142Z" level=info msg="createCtr: deleting container ID 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from idIndex" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:31.083043    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:31.083594    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:31.085167    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:31.085556    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:31.086920    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:31 up  2:59,  0 user,  load average: 0.10, 0.07, 0.15
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:21 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:21 ha-430216 kubelet[1982]: E1008 15:16:21.442439    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.412369    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441507    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441611    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441643    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.154614    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.374352    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-430216&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.412725    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439324    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439456    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:30 ha-430216 kubelet[1982]: E1008 15:16:30.433681    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (292.237429ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:31.455376  159418 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.52s)
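
Note: the kubeadm output above already spells out the inspection flow for this failure. In the CRI-O and kubelet excerpts, the control-plane containers (kube-apiserver, kube-controller-manager, etcd) all fail to create with "cannot open sd-bus: No such file or directory", so none of the control-plane health checks can ever succeed. A minimal sketch of that flow, assuming shell access to the node for this run's profile (the crictl commands are the ones quoted in the kubeadm hint; CONTAINERID is a placeholder):

	# open a shell on the affected node (profile name taken from this run)
	minikube ssh -p ha-430216
	# list all Kubernetes containers CRI-O knows about, including failed ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container (replace CONTAINERID with an ID from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
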

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (1.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-430216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-430216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (44.656021ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-430216

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-430216 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-430216 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
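Both failure messages above are kubeconfig-level: the ha-430216 context was never written because the cluster start failed, so kubectl errors out before reaching any API server. A minimal sketch of the remediation the status output below suggests, assuming the profile were actually healthy (hypothetical here, since the apiserver never started; the profile flag is added to target this run's profile):

	# re-point kubectl at the profile's endpoint, as suggested by the minikube status warning
	minikube update-context -p ha-430216
	# re-run the failed query once the context resolves
	kubectl --context ha-430216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
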
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (288.33578ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:31.807061  159553 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
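The docker network create invocation above succeeds on the first free private /24 (192.168.49.0/24 with gateway 192.168.49.1). As a quick manual cross-check, assuming the docker CLI is available on the host, the subnet minikube picked can be read back from the network object:

    # read back the subnet chosen for the ha-430216 network (manual check, not part of the test)
    docker network inspect ha-430216 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output per the log above: 192.168.49.0/24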
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
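The docker run above publishes the node's SSH, API server and registry ports on 127.0.0.1 with ephemeral host ports (--publish=127.0.0.1::22, ::8443, and so on). Purely as an illustration, the host port actually assigned for SSH could be read back with docker port; the log resolves it to 32783 a few lines further down:

    # look up the ephemeral host port mapped to the node's SSH port (illustrative)
    docker port ha-430216 22/tcp
    # e.g. 127.0.0.1:32783, matching the SSH client the log opens below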
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
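Taken together, the sed edits above amount to the following overrides in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. This is a reconstruction from the commands shown in the log; only the keys and values come from the log, and the TOML section placement is assumed:

    # reconstructed effect of the sed edits above (section headers assumed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]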
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
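The [Unit]/[Service] fragment above is the drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little later in the log (359 bytes). A sketch of how one could confirm the merged unit on the node afterwards, assuming docker exec access to the ha-430216 container:

    # show the kubelet unit together with the minikube drop-in (manual check, not part of the test)
    docker exec ha-430216 systemctl cat kubelet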
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
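The kubeadm config above is rendered to /var/tmp/minikube/kubeadm.yaml.new (see the scp later in the log) and consumed by kubeadm during StartCluster. Purely as an illustration, and assuming the kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.34.1, a config of this shape could be sanity-checked on the node with a dry run:

    # illustrative only -- minikube drives kubeadm itself; this just validates the rendered config
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run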
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
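This static-pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml just below, and the address it advertises (192.168.49.254) is the APIServerHAVIP that also gets pinned to control-plane.minikube.internal in /etc/hosts. A rough manual check once the control plane is up, run from inside the node, might look like this (illustrative, not part of the test):

    # confirm kube-vip bound the VIP to eth0 and that the apiserver answers on it
    ip addr show dev eth0 | grep 192.168.49.254
    curl -k https://192.168.49.254:8443/healthz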
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
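	(The sequence above is the standard OpenSSL trust-store dance: hash the PEM with `openssl x509 -hash` and symlink it under /etc/ssl/certs as <hash>.0. A minimal sketch of the equivalent manual steps, assuming the same in-VM paths shown in the log; the HASH variable name is illustrative only.)
	  # Make the minikube CA trusted by OpenSSL-based clients on the node.
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  # OpenSSL looks certificates up by subject-hash symlinks named <hash>.0.
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"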
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
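	(The kubeadm failure above points at crictl for triage. A minimal sketch of that triage loop on the node, assuming the CRI-O socket path quoted in the message; CONTAINERID is a placeholder for an ID taken from the listing, and --no-pager is an optional convenience flag.)
	  # List every kube-* container (running or exited) known to CRI-O.
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # Dump the logs of a suspect container by its ID from the first column.
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	  # The kubelet journal usually shows why the static pods never came up.
	  sudo journalctl -u kubelet -n 400 --no-pager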
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439507639Z" level=info msg="createCtr: removing container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.439544791Z" level=info msg="createCtr: deleting container 42cf6cb1eb7742ccbb00510a284b89c99c0476de00e1ac658326a800c1fe7378 from storage" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:21 ha-430216 crio[778]: time="2025-10-08T15:16:21.441950194Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=0d09def5-40ee-40fb-a7b4-8ad5a0a6604e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.412927114Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c89a34e6-8f70-4aa2-b4c7-fbe815696d76 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.414896715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3dedfa25-289d-40ce-b86e-ff3c388716ca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.415797667Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.416057282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419275293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.419779166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.437693556Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439076432Z" level=info msg="createCtr: deleting container ID eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from idIndex" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439114836Z" level=info msg="createCtr: removing container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.439146235Z" level=info msg="createCtr: deleting container eb4185537daf1bd74f9763058f462726dfec996e29ee3a334ac17e82a67a2d9a from storage" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:23 ha-430216 crio[778]: time="2025-10-08T15:16:23.441195501Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=05261bbb-bc05-420b-9b95-b17528c34bad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.413167379Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=66ad4ef4-2293-4726-bdaa-e82870344008 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414014396Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94fdd9cc-fb5f-4d79-802c-8d7a00f80cb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.414902679Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.415127356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.419267751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.420117579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.435493111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436880142Z" level=info msg="createCtr: deleting container ID 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from idIndex" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:32.400715    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:32.401339    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:32.402927    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:32.403569    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:32.405105    3556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:32 up  2:59,  0 user,  load average: 0.33, 0.12, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441507    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441611    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:23 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:23 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:23 ha-430216 kubelet[1982]: E1008 15:16:23.441643    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.154614    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.374352    1982 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-430216&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.412725    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439324    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439456    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:24 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:24 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:30 ha-430216 kubelet[1982]: E1008 15:16:30.433681    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.050770    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: I1008 15:16:32.222717    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.223142    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (294.440756ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:32.782070  159879 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.33s)
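The kubeadm stderr and the kubelet log above both fail on the same CRI-O error, "cannot open sd-bus: No such file or directory", while creating the kube-apiserver, kube-controller-manager and etcd containers. A minimal diagnostic sketch, assuming the ha-430216 node container is still up and that crictl is present in the node image (as the kubeadm hint implies); these exec commands are illustrative and not part of the test harness:

	# list the kube-* containers CRI-O attempted to create (the command kubeadm suggests above)
	docker exec ha-430216 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# "cannot open sd-bus" typically means CRI-O expects the systemd cgroup manager but
	# cannot reach a D-Bus/systemd instance inside the node; check both:
	docker exec ha-430216 cat /proc/1/comm                     # expected to be "systemd" in the kicbase image
	docker exec ha-430216 ls -l /run/dbus/system_bus_socket    # a missing socket matches the error above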

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-430216" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-430216" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
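Both assertions above are derived from the JSON printed by out/minikube-linux-amd64 profile list --output json: the profile reports Status "Starting" and a single entry under Config.Nodes instead of the expected four. A quick way to pull the same fields outside the test, as a sketch assuming jq is available on the host (the field paths are taken from the JSON shown above):

	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-430216") | {Status: .Status, nodes: (.Config.Nodes | length)}'
	# expected by the test: Status "HAppy" and 4 nodes; reported here: "Starting" and 1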
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
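The inspect output above mainly confirms that the node container is running and shows which loopback ports map to the cluster. A short sketch for extracting just those fields with the standard docker CLI (the container and network name are the profile name shown above):

	docker inspect -f '{{.State.Status}}' ha-430216                                             # running
	docker port ha-430216 8443/tcp                                                              # 127.0.0.1:32786
	docker inspect -f '{{(index .NetworkSettings.Networks "ha-430216").IPAddress}}' ha-430216   # 192.168.49.2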
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (297.673406ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:33.413173  160128 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
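The status check exits 6 because the "ha-430216" entry is missing from the kubeconfig (see the stderr above), not because the node host is down. A sketch of re-syncing the context by hand, as the warning in the stdout suggests (standard minikube/kubectl commands; getting nodes will still fail until the apiserver actually comes up):

	out/minikube-linux-amd64 -p ha-430216 update-context   # rewrite the kubeconfig entry for this profile
	kubectl config current-context                         # should now print ha-430216
	kubectl --context ha-430216 get nodes                  # still refused while the apiserver is not running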
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
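The cluster config struct dumped above is what gets persisted to the config.json path shown here. A quick sanity check of the written file (assuming jq is available on the CI host; the key names are taken from the struct dump, not verified against the schema) might look like:

    jq '{Name, KubernetesVersion: .KubernetesConfig.KubernetesVersion, Nodes}' \
      /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json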
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
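To double-check the network just created against the values minikube calculated, the same inspect template used elsewhere in this log can be narrowed to subnet and gateway:

    docker network inspect ha-430216 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected from the lines above: 192.168.49.0/24 192.168.49.1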
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
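The container publishes SSH, the Docker socket and the API server on ephemeral host ports (see the --publish flags in the docker run above). The port the following SSH provisioning steps connect to can be confirmed with either of these commands; the inspect template is the same one minikube itself runs a few lines below.

    docker port ha-430216 22/tcp
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-430216
    # for this run the mapped port is 32783, bound to 127.0.0.1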
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
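A minimal check that the drop-in above landed and that cri-o came back after the restart, assuming a shell on the node (docker exec or minikube ssh); both paths are taken verbatim from the SSH command:

    docker exec ha-430216 cat /etc/sysconfig/crio.minikube
    docker exec ha-430216 systemctl is-active crio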
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
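The sed edits above only touch a handful of keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl). A minimal way to confirm they landed, using only the key names that appear in those commands:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf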
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
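The kubelet unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; once that scp has happened, the merged unit (base service plus the ExecStart override) can be reviewed on the node with:

    systemctl cat kubelet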
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
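The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new below (and later to kubeadm.yaml). Assuming the bundled kubeadm release supports the `config validate` subcommand, a quick syntax check on the node would be:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new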
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
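This static pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so the kubelet launches kube-vip directly. Once the control plane is up and this node holds the plndr-cp-lock lease, the VIP from the config (192.168.49.254 on eth0) should appear on the node; a simple spot check, using only values from the manifest above:

    ls -l /etc/kubernetes/manifests/kube-vip.yaml
    ip -4 addr show dev eth0 | grep 192.168.49.254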
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
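The ln -fs commands above follow OpenSSL's hashed-symlink convention: each CA placed in /usr/share/ca-certificates is linked from /etc/ssl/certs under the name <subject-hash>.0, where the hash is what the openssl x509 -hash calls compute. A hand check for the minikube CA, reusing the hash already visible in this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0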
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415527216Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c45635d8-bdfa-4c4f-b23a-fe85af1d1f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415533521Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=008088be-6210-4ad2-8d2b-154e25cc879f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416526641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b34aefb0-d9fb-4727-9b05-eeb8d8d9e05e name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416553564Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=944b7402-9fd2-40cb-93e5-b3d95a3edcbf name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.41738619Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417525807Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417633437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417786319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.421434077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.422107208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.425146313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.426251892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.441903776Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.442793412Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443625173Z" level=info msg="createCtr: deleting container ID 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from idIndex" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443666036Z" level=info msg="createCtr: removing container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443696713Z" level=info msg="createCtr: deleting container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from storage" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444232866Z" level=info msg="createCtr: deleting container ID 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from idIndex" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444281535Z" level=info msg="createCtr: removing container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444323296Z" level=info msg="createCtr: deleting container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from storage" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447483959Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447821166Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:34.009374    3738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:34.009982    3738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:34.011550    3738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:34.011998    3738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:34.013588    3738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:34 up  2:59,  0 user,  load average: 0.33, 0.12, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:30 ha-430216 kubelet[1982]: E1008 15:16:30.433681    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.050770    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: I1008 15:16:32.222717    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.223142    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.414988    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.415109    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447799    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="53772938dd72b0704ce7f5196ea9e84ad454215649feb01984fd33ff782177e3"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447922    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447957    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448064    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448115    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.449010    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	

                                                
                                                
-- /stdout --
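The kubeadm guidance quoted in the log above is the natural next step when the control plane never becomes healthy: inspect the containers through CRI-O directly. A minimal sketch, assuming the commands are run inside the ha-430216 node (socket path taken from the log; CONTAINERID is a placeholder). Note that in this run container creation itself failed with the sd-bus error, so `crictl ps -a` may list no kube containers at all, as the empty "container status" section above suggests:

	minikube -p ha-430216 ssh
	# inside the node: list kube-* containers, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
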
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (291.070414ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:34.387292  160470 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.61s)
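The post-mortem status output above also flags a stale kubeconfig (the "ha-430216" endpoint is missing from the kubeconfig file) and suggests `minikube update-context`. A minimal sketch of that recovery step, assuming the profile still exists; it only rewrites the kubectl context and does not address the apiserver failure itself:

	minikube -p ha-430216 update-context
	kubectl config current-context
	kubectl get nodes
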

                                                
                                    
TestMultiControlPlane/serial/CopyFile (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --output json --alsologtostderr -v 5: exit status 6 (295.611785ms)

                                                
                                                
-- stdout --
	{"Name":"ha-430216","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:34.445580  160583 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:34.445838  160583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:34.445847  160583 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:34.445851  160583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:34.446049  160583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:34.446223  160583 out.go:368] Setting JSON to true
	I1008 15:16:34.446251  160583 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:34.446393  160583 notify.go:220] Checking for updates...
	I1008 15:16:34.446572  160583 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:34.446585  160583 status.go:174] checking status of ha-430216 ...
	I1008 15:16:34.447005  160583 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:34.465093  160583 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:34.465146  160583 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:34.465530  160583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:34.484221  160583 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:34.484487  160583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:34.484530  160583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:34.501934  160583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:34.602752  160583 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:34.609335  160583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:34.622008  160583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:34.682653  160583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:34.672515678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:34.683207  160583 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:34.683238  160583 api_server.go:166] Checking apiserver status ...
	I1008 15:16:34.683278  160583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:34.693798  160583 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:34.693827  160583 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:34.693839  160583 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430216 status --output json --alsologtostderr -v 5" : exit status 6
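The harness queries status both through a Go template (`--format={{.APIServer}}`) and as JSON (`--output json`). A minimal sketch of doing the same by hand, with the profile name taken from this report (the jq step is an assumption, not something the test uses):

	minikube -p ha-430216 status --format='{{.APIServer}}'
	minikube -p ha-430216 status --output json | jq -r .APIServer

Note that status exits non-zero (6 in this run) when the kubeconfig is misconfigured, so scripts should check the exit code the way helpers_test does.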
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
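The docker inspect dump above is the same data minikube's status path reads with Go templates (see the `docker container inspect ... --format` calls in the stderr trace earlier). A minimal sketch of pulling the same fields out by hand, using the container name from this report:

	docker container inspect ha-430216 --format '{{.State.Status}}'
	docker container inspect ha-430216 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	docker container inspect ha-430216 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
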
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (295.982126ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:34.998662  160708 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format yaml --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
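
The subnet selection above probed existing Docker networks and settled on 192.168.49.0/24 as the first free private /24. A rough sketch of that kind of overlap check in Go (the in-use and candidate lists below are assumptions for the example, not minikube's actual tables):

// freecidr.go - illustrative sketch only: pick the first candidate /24 that does
// not overlap any subnet already in use (e.g. by existing Docker networks).
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDRs share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	inUse := []string{"172.17.0.0/16"} // assumption: the default docker bridge
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}

	var used []*net.IPNet
	for _, c := range inUse {
		if _, n, err := net.ParseCIDR(c); err == nil {
			used = append(used, n)
		}
	}
	for _, c := range candidates {
		_, cand, _ := net.ParseCIDR(c)
		free := true
		for _, u := range used {
			if overlaps(cand, u) {
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", c)
			return
		}
	}
	fmt.Println("no free subnet found")
}
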
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
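
The container's state is polled here through docker container inspect with a Go template. A small sketch of the same query driven from Go via the docker CLI (container name taken from this run; docker must be on PATH):

// status.go - illustrative sketch: query a container's state with the docker CLI,
// mirroring the inspect calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "ha-430216",
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		fmt.Println("inspect failed:", err, strings.TrimSpace(string(out)))
		return
	}
	fmt.Println("container status:", strings.TrimSpace(string(out)))
}
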
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
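
Provisioning runs each of these commands over SSH against the published host port (127.0.0.1:32783 in this run). A minimal sketch of executing one such command with golang.org/x/crypto/ssh, using the key path and user from this log and, as an assumption for the example, ignoring the host key:

// sshrun.go - illustrative sketch: run one command on the node over SSH.
// Port, key path and user are taken from this run; ignoring the host key is
// an assumption made for the example, not a recommendation.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}
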
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
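
The rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. As an illustrative check only, assuming gopkg.in/yaml.v3 (not necessarily the library minikube itself uses), the documents can be decoded in sequence and their kinds listed:

// kinds.go - illustrative sketch: list the kind of each document in a
// multi-document kubeadm YAML stream. The embedded string stands in for the
// file written by the step above.
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
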
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
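
The kube-vip manifest above advertises 192.168.49.254 as the control-plane virtual IP, with the API server expected behind it on port 8443. A minimal reachability sketch for that endpoint (address taken from this log; the timeout is an arbitrary choice for the example):

// vipcheck.go - illustrative sketch: TCP-probe the control-plane virtual IP
// that kube-vip is configured to advertise.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("control-plane VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("control-plane VIP reachable at", addr)
}
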
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
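
The apiserver profile certificate generated above embeds the service IP, loopback, node IP and the HA VIP as SANs. A short sketch for inspecting the SANs of the written PEM file (path taken from this log; this is an example, not part of minikube's own verification):

// sandump.go - illustrative sketch: print the SANs of a PEM-encoded certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	path := "/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("subject:", cert.Subject.CommonName)
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN:", ip)
	}
	for _, d := range cert.DNSNames {
		fmt.Println("DNS SAN:", d)
	}
}
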
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
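
Troubleshooting note: the kubeadm advice quoted above can be followed from the host by running crictl inside the failed node. A minimal sketch, assuming the profile name ha-430216 and the CRI-O socket path exactly as printed in the log (adjust if your profile differs):

	# profile name and socket path are taken from the log above; CONTAINERID is a placeholder
	minikube -p ha-430216 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	minikube -p ha-430216 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID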
	
	
	==> CRI-O <==
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415527216Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c45635d8-bdfa-4c4f-b23a-fe85af1d1f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415533521Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=008088be-6210-4ad2-8d2b-154e25cc879f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416526641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b34aefb0-d9fb-4727-9b05-eeb8d8d9e05e name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416553564Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=944b7402-9fd2-40cb-93e5-b3d95a3edcbf name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.41738619Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417525807Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417633437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417786319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.421434077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.422107208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.425146313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.426251892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.441903776Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.442793412Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443625173Z" level=info msg="createCtr: deleting container ID 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from idIndex" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443666036Z" level=info msg="createCtr: removing container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443696713Z" level=info msg="createCtr: deleting container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from storage" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444232866Z" level=info msg="createCtr: deleting container ID 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from idIndex" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444281535Z" level=info msg="createCtr: removing container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444323296Z" level=info msg="createCtr: deleting container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from storage" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447483959Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447821166Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:35.597668    3912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:35.598299    3912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:35.600008    3912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:35.600509    3912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:35.602075    3912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:35 up  2:59,  0 user,  load average: 0.33, 0.12, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:24 ha-430216 kubelet[1982]: E1008 15:16:24.439495    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:30 ha-430216 kubelet[1982]: E1008 15:16:30.433681    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.050770    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: I1008 15:16:32.222717    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.223142    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.414988    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.415109    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447799    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="53772938dd72b0704ce7f5196ea9e84ad454215649feb01984fd33ff782177e3"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447922    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447957    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448064    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448115    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.449010    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	

                                                
                                                
-- /stdout --
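The kubeadm output captured above points at crictl for inspecting the failing control-plane containers. A minimal sketch of that workflow, assuming shell access to the node (profile name taken from this run; CONTAINERID is a placeholder for an ID from the listing):

    minikube ssh -p ha-430216
    # inside the node: list all containers, including ones that never reached Running
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

Since the create step itself fails here ("cannot open sd-bus: No such file or directory") and CRI-O then removes the half-created container from storage, there may be nothing for `logs` to show; `ps -a` at least confirms whether any attempt survived.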
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (296.65115ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:35.980518  161059 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
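Both the `Stopped` apiserver status and the `kubeconfig endpoint` error above come back to the host kubeconfig having no entry for this profile. A minimal sketch of the fix the warning itself suggests (profile name from this run; it only helps once the apiserver is actually reachable):

    out/minikube-linux-amd64 -p ha-430216 update-context
    kubectl config current-context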
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 node stop m02 --alsologtostderr -v 5: exit status 85 (71.64392ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:36.046389  161170 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:36.046695  161170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:36.046707  161170 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:36.046712  161170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:36.046945  161170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:36.047234  161170 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:36.047614  161170 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:36.053419  161170 out.go:203] 
	W1008 15:16:36.054931  161170 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1008 15:16:36.054948  161170 out.go:285] * 
	* 
	W1008 15:16:36.060421  161170 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:16:36.062244  161170 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-430216 node stop m02 --alsologtostderr -v 5": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (291.720029ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:36.112171  161181 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:36.112403  161181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:36.112411  161181 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:36.112415  161181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:36.112649  161181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:36.112837  161181 out.go:368] Setting JSON to false
	I1008 15:16:36.112864  161181 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:36.112987  161181 notify.go:220] Checking for updates...
	I1008 15:16:36.113225  161181 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:36.113240  161181 status.go:174] checking status of ha-430216 ...
	I1008 15:16:36.113667  161181 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:36.131915  161181 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:36.131951  161181 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:36.132220  161181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:36.148276  161181 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:36.148581  161181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:36.148635  161181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:36.165898  161181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:36.265920  161181 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:36.272383  161181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:36.284586  161181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:36.343316  161181 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:36.333380062 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:36.343797  161181 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:36.343830  161181 api_server.go:166] Checking apiserver status ...
	I1008 15:16:36.343878  161181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:36.354924  161181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:36.354956  161181 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:36.354967  161181 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5" : exit status 6
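The status probes in this test go through Go templates over minikube's Status struct; the raw struct is visible in the stderr above (`&{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured ...}`). A small sketch of the same query, with field names taken from that line:

    out/minikube-linux-amd64 status -p ha-430216 --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'

For the state captured above this would be expected to print `ha-430216: host=Running kubelet=Running apiserver=Stopped kubeconfig=Misconfigured` and, as with the `{{.Host}}` and `{{.APIServer}}` calls elsewhere in this report, exit with status 6.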
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
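The `new ssh client: &{IP:127.0.0.1 Port:32783 ...}` line in the status stderr earlier is derived from the same inspect template this test uses (`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`). A minimal sketch of reading that port straight from the container state dumped above, with quoting adjusted for a plain shell:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-430216
    # prints 32783 for the Ports section above; minikube then dials 127.0.0.1:32783 for SSH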
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (292.820445ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:36.657237  161305 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                                                  │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
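	The kubeadm output above ends by recommending a crictl-based triage, and the CRI-O journal excerpt below shows the underlying failure: container creation aborts with "cannot open sd-bus: No such file or directory". A minimal triage sketch, assuming shell access to the ha-430216 node; the first two commands are taken verbatim from the kubeadm error text, the journal grep is an added assumption, and CONTAINERID is a placeholder for whichever failing container the listing reveals:

		# List all kube-* containers known to CRI-O (command quoted in the kubeadm error above)
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Inspect the logs of a failing container found by the previous command (CONTAINERID is a placeholder)
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
		# Surface the create-time error directly from the CRI-O journal (the same journalctl source minikube gathers above)
		sudo journalctl -u crio -n 400 | grep -i "container creation error"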
	
	
	==> CRI-O <==
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436916733Z" level=info msg="createCtr: removing container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.436949518Z" level=info msg="createCtr: deleting container 3fc5bda2f4bc191f877d25aa72273a881b01bd7c7ddf34d3e80e43a2e4b0a054 from storage" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:24 ha-430216 crio[778]: time="2025-10-08T15:16:24.439020802Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ddee30bb-90a6-4fc5-8e36-8a979b90aee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415527216Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c45635d8-bdfa-4c4f-b23a-fe85af1d1f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.415533521Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=008088be-6210-4ad2-8d2b-154e25cc879f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416526641Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b34aefb0-d9fb-4727-9b05-eeb8d8d9e05e name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.416553564Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=944b7402-9fd2-40cb-93e5-b3d95a3edcbf name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.41738619Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417525807Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417633437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.417786319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.421434077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.422107208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.425146313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.426251892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.441903776Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.442793412Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443625173Z" level=info msg="createCtr: deleting container ID 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from idIndex" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443666036Z" level=info msg="createCtr: removing container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.443696713Z" level=info msg="createCtr: deleting container 340da3c01a025727d3c024dcb53a1172241ddd62cd2b05d790e27dbf85f14c7d from storage" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444232866Z" level=info msg="createCtr: deleting container ID 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from idIndex" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444281535Z" level=info msg="createCtr: removing container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444323296Z" level=info msg="createCtr: deleting container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from storage" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447483959Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447821166Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:37.241347    4081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:37.241900    4081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:37.243476    4081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:37.243843    4081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:37.245193    4081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:37 up  2:59,  0 user,  load average: 0.38, 0.13, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.050070    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: I1008 15:16:25.220884    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:25 ha-430216 kubelet[1982]: E1008 15:16:25.221314    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:26 ha-430216 kubelet[1982]: E1008 15:16:26.664263    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:30 ha-430216 kubelet[1982]: E1008 15:16:30.433681    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.050770    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: I1008 15:16:32.222717    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:16:32 ha-430216 kubelet[1982]: E1008 15:16:32.223142    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.414988    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.415109    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447799    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="53772938dd72b0704ce7f5196ea9e84ad454215649feb01984fd33ff782177e3"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447922    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447957    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448064    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448115    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.449010    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:16:36 ha-430216 kubelet[1982]: E1008 15:16:36.665867    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (303.098976ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:37.625587  161634 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.65s)
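The repeated "cannot open sd-bus: No such file or directory" errors from CRI-O and the kubelet above usually mean the runtime is trying to reach systemd (for example for systemd cgroup management) while systemd's bus is not available inside the node container. A minimal troubleshooting sketch, not part of the test output, assuming the node container ha-430216 from the logs above and that the commands are run on the host:

	# check whether systemd is actually up inside the kic node
	docker exec ha-430216 systemctl is-system-running
	# see which cgroup manager CRI-O is configured with, if set explicitly
	docker exec ha-430216 grep -r cgroup_manager /etc/crio/
	# the container listing kubeadm suggests above, run from the host
	docker exec ha-430216 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a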

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-430216" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
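The docker inspect dump above is exhaustive; when only a few fields matter for the post-mortem, Go templates can narrow it down. A sketch using the container and network names from the dump:

	docker inspect -f '{{.State.Status}}' ha-430216
	docker inspect -f '{{(index .NetworkSettings.Networks "ha-430216").IPAddress}}' ha-430216
	docker inspect -f '{{json .NetworkSettings.Ports}}' ha-430216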
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (287.281932ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:38.241534  161896 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
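Both status checks fail the same way: the "ha-430216" entry is missing from the kubeconfig the test points at, which is also why the status output suggests `minikube update-context`. One way to inspect and repair the context (a sketch, using the kubeconfig path from the error above):

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21681-94984/kubeconfig
	out/minikube-linux-amd64 -p ha-430216 update-context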
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr          │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                                                  │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
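The subnet above (192.168.49.0/24) is picked automatically from minikube's private ranges, and the network is created with plain Docker CLI flags as logged. A minimal sketch of how one could confirm the result by hand (a hypothetical follow-up command, not something this test run executes):

	docker network inspect ha-430216 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected for this run: 192.168.49.0/24 192.168.49.1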
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
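Note the `--publish=127.0.0.1::22` form in the command above: the empty host-port field asks Docker to bind a random loopback port, which minikube then resolves via a container inspect (seen further down in this log). A hedged standalone equivalent of that lookup:

	docker container inspect ha-430216 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints the randomly assigned host port for SSH (32783 in this run)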
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
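The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart: it sets the pause image, switches the cgroup manager to systemd, pins conmon to the pod cgroup, and allows unprivileged low ports via default_sysctls. A hedged way to confirm the resulting drop-in inside the node container (a reconstruction; the full file contents are not shown in this log):

	docker exec ha-430216 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected keys: pause_image = "registry.k8s.io/pause:3.10.1",
	#   cgroup_manager = "systemd", conmon_cgroup = "pod",
	#   "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls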
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
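Because `lsmod | grep ip_vs` returned nothing (see the kube-vip.go:163 line above), minikube gives up on IPVS load-balancing and generates the ARP-based kube-vip manifest just printed (vip_arp=true, address 192.168.49.254). A hedged sketch of the check, and how the modules would normally be loaded on a host that permits it (not commands this test run executes):

	docker exec ha-430216 sh -c 'lsmod | grep ip_vs || echo "ip_vs not loaded"'
	# on a host where loading is allowed, one would typically run:
	#   sudo modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh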
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
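With the generated kubeadm config now written to /var/tmp/minikube/kubeadm.yaml.new, one hedged way to sanity-check it before the actual init (recent kubeadm releases ship a `config validate` subcommand; this test run does not perform this step):

	docker exec ha-430216 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml.new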
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
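The apiserver certificate generated above is signed for the service VIP, loopback, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254). A hedged way to double-check the SANs on the copy that was just scp'd into the node (not part of this run):

	docker exec ha-430216 sudo sh -c \
	  'openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 "Subject Alternative Name"'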
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
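The three `ln -fs` commands above create OpenSSL hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) so that anything using the system trust store can resolve the minikube CA and the test certificates by subject hash. The hashes come from the interleaved `openssl x509 -hash` calls; a hedged recap for the CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the link /etc/ssl/certs/b5213941.0 -> minikubeCA.pem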
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
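(Aside, not part of the captured output: the three endpoints named in the error above can be probed by hand to confirm which component is unreachable. This is a minimal sketch and assumes shell access to the node, e.g. via "minikube ssh -p ha-430216" or "docker exec -it ha-430216 bash"; the URLs and the CRI-O socket path are copied from the log itself.)

	curl -ks https://127.0.0.1:10259/livez      # kube-scheduler
	curl -ks https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -ks https://192.168.49.2:8443/livez    # kube-apiserver
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

(A "connection refused" from all three, together with an empty crictl listing, would suggest the static pod containers were never created rather than created and crashed; the CRI-O and kubelet sections further down in this log are consistent with that reading.)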
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.444323296Z" level=info msg="createCtr: deleting container 57f765fc2adb5be16806f927fffe61bee6368d7b83df869ed218ee621655acbf from storage" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447483959Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=464860c7-58c6-435b-be34-74876429802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:33 ha-430216 crio[778]: time="2025-10-08T15:16:33.447821166Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=097695c9-88e3-4819-b7c2-74e2a62336d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.413332018Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ae9e7797-9a42-4d6c-8e84-88f21a2e114d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.413400165Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c649eef1-3fc0-4ee5-b41b-cd0680260815 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.414424475Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=465d0ecf-98e7-4731-a85e-179e965ee0e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.414463555Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2f553994-6cd1-40de-9062-bb6dd3f491b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.415418361Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.415557142Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.415688021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.41578443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.421754196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.422353718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.42275593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.423266582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.442074374Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.443915771Z" level=info msg="createCtr: deleting container ID 129d891d1e3713ac96736f45d7ceed80cd50e1aa1ccfd1a0526eae40cc3a3c0e from idIndex" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.443963281Z" level=info msg="createCtr: removing container 129d891d1e3713ac96736f45d7ceed80cd50e1aa1ccfd1a0526eae40cc3a3c0e" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.444003688Z" level=info msg="createCtr: deleting container 129d891d1e3713ac96736f45d7ceed80cd50e1aa1ccfd1a0526eae40cc3a3c0e from storage" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.444354888Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.445889481Z" level=info msg="createCtr: deleting container ID 40feb6e05c1cbde6f668d27a7137c43ceb5c98752bb4b2049ae01852ed470b90 from idIndex" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.445923008Z" level=info msg="createCtr: removing container 40feb6e05c1cbde6f668d27a7137c43ceb5c98752bb4b2049ae01852ed470b90" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.445951062Z" level=info msg="createCtr: deleting container 40feb6e05c1cbde6f668d27a7137c43ceb5c98752bb4b2049ae01852ed470b90 from storage" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.449243751Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=61eeb610-7d62-4acd-8f7c-1be455d4a252 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:16:37 ha-430216 crio[778]: time="2025-10-08T15:16:37.4497347Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=80e95b78-2634-4a1e-92f5-602a856c3c11 name=/runtime.v1.RuntimeService/CreateContainer
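(Aside, not part of the captured output: the create attempts shown above all fail with "cannot open sd-bus: No such file or directory". One plausible, unverified line of investigation is whether CRI-O is configured to use the systemd cgroup manager while no systemd/D-Bus socket is reachable inside the node; the active setting could be inspected from inside the node, for example:)

	sudo crio config 2>/dev/null | grep -i cgroup_manager
	sudo grep -ri cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null

(The log itself does not include the CRI-O configuration, so this is only a diagnostic starting point, not a confirmed root cause.)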
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:16:38.824217    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:38.824846    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:38.826530    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:38.826939    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:16:38.828510    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:16:38 up  2:59,  0 user,  load average: 0.38, 0.13, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.447957    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448064    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.448115    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:33 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:33 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:33 ha-430216 kubelet[1982]: E1008 15:16:33.449010    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:16:36 ha-430216 kubelet[1982]: E1008 15:16:36.665867    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.412745    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.412938    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.449603    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:37 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:37 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.449726    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:37 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:37 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.449770    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.450003    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:16:37 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:37 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.450086    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:16:37 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:16:37 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:16:37 ha-430216 kubelet[1982]: E1008 15:16:37.451249    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (297.968378ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:16:39.208936  162215 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (57.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 node start m02 --alsologtostderr -v 5: exit status 85 (69.395158ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:39.269319  162328 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:39.269593  162328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:39.269604  162328 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:39.269609  162328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:39.269871  162328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:39.270237  162328 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:39.270641  162328 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:39.276785  162328 out.go:203] 
	W1008 15:16:39.281868  162328 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1008 15:16:39.281887  162328 out.go:285] * 
	* 
	W1008 15:16:39.286943  162328 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:16:39.288697  162328 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:424: I1008 15:16:39.269319  162328 out.go:360] Setting OutFile to fd 1 ...
I1008 15:16:39.269593  162328 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:16:39.269604  162328 out.go:374] Setting ErrFile to fd 2...
I1008 15:16:39.269609  162328 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:16:39.269871  162328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:16:39.270237  162328 mustload.go:65] Loading cluster: ha-430216
I1008 15:16:39.270641  162328 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:16:39.276785  162328 out.go:203] 
W1008 15:16:39.281868  162328 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1008 15:16:39.281887  162328 out.go:285] * 
* 
W1008 15:16:39.286943  162328 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 15:16:39.288697  162328 out.go:203] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-430216 node start m02 --alsologtostderr -v 5": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (292.806479ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:39.337580  162339 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:39.337832  162339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:39.337840  162339 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:39.337844  162339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:39.338025  162339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:39.338210  162339 out.go:368] Setting JSON to false
	I1008 15:16:39.338239  162339 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:39.338335  162339 notify.go:220] Checking for updates...
	I1008 15:16:39.338556  162339 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:39.338569  162339 status.go:174] checking status of ha-430216 ...
	I1008 15:16:39.338995  162339 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:39.359310  162339 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:39.359342  162339 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:39.359698  162339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:39.379301  162339 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:39.379604  162339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:39.379644  162339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:39.397293  162339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:39.496845  162339 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:39.503019  162339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:39.515655  162339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:39.571532  162339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:39.562136598 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:39.571959  162339 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:39.571987  162339 api_server.go:166] Checking apiserver status ...
	I1008 15:16:39.572027  162339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:39.582085  162339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:39.582103  162339 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:39.582113  162339 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:39.586958   98900 retry.go:31] will retry after 584.209321ms: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (287.197717ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:40.214823  162470 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:40.215093  162470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:40.215104  162470 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:40.215111  162470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:40.215326  162470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:40.215536  162470 out.go:368] Setting JSON to false
	I1008 15:16:40.215575  162470 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:40.215715  162470 notify.go:220] Checking for updates...
	I1008 15:16:40.215947  162470 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:40.215965  162470 status.go:174] checking status of ha-430216 ...
	I1008 15:16:40.216430  162470 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:40.233399  162470 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:40.233429  162470 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:40.233751  162470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:40.252192  162470 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:40.252479  162470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:40.252529  162470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:40.269608  162470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:40.370871  162470 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:40.377238  162470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:40.389768  162470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:40.443774  162470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:40.434510416 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:40.444220  162470 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:40.444244  162470 api_server.go:166] Checking apiserver status ...
	I1008 15:16:40.444277  162470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:40.454639  162470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:40.454663  162470 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:40.454673  162470 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:40.459217   98900 retry.go:31] will retry after 928.754128ms: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (290.680393ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:41.432590  162587 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:41.432905  162587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:41.432916  162587 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:41.432921  162587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:41.433128  162587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:41.433316  162587 out.go:368] Setting JSON to false
	I1008 15:16:41.433346  162587 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:41.433519  162587 notify.go:220] Checking for updates...
	I1008 15:16:41.433824  162587 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:41.433849  162587 status.go:174] checking status of ha-430216 ...
	I1008 15:16:41.434402  162587 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:41.452838  162587 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:41.452869  162587 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:41.453228  162587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:41.471288  162587 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:41.471567  162587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:41.471621  162587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:41.488591  162587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:41.588762  162587 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:41.595512  162587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:41.607945  162587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:41.662727  162587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:41.651845498 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:41.663489  162587 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:41.663527  162587 api_server.go:166] Checking apiserver status ...
	I1008 15:16:41.663576  162587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:41.674132  162587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:41.674153  162587 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:41.674165  162587 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:41.679269   98900 retry.go:31] will retry after 1.82829017s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (293.781259ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:43.552537  162700 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:43.552826  162700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:43.552836  162700 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:43.552840  162700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:43.553098  162700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:43.553323  162700 out.go:368] Setting JSON to false
	I1008 15:16:43.553358  162700 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:43.553454  162700 notify.go:220] Checking for updates...
	I1008 15:16:43.553836  162700 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:43.553853  162700 status.go:174] checking status of ha-430216 ...
	I1008 15:16:43.554312  162700 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:43.571526  162700 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:43.571558  162700 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:43.571888  162700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:43.589949  162700 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:43.590313  162700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:43.590370  162700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:43.607964  162700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:43.708972  162700 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:43.715750  162700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:43.728346  162700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:43.787608  162700 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:43.777195404 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:43.788162  162700 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:43.788192  162700 api_server.go:166] Checking apiserver status ...
	I1008 15:16:43.788234  162700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:43.798746  162700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:43.798766  162700 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:43.798776  162700 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:43.803319   98900 retry.go:31] will retry after 4.96896698s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (292.983196ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:48.818247  162841 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:48.818528  162841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:48.818537  162841 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:48.818541  162841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:48.818737  162841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:48.818914  162841 out.go:368] Setting JSON to false
	I1008 15:16:48.818943  162841 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:48.819096  162841 notify.go:220] Checking for updates...
	I1008 15:16:48.819267  162841 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:48.819281  162841 status.go:174] checking status of ha-430216 ...
	I1008 15:16:48.819712  162841 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:48.838065  162841 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:48.838113  162841 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:48.838395  162841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:48.855713  162841 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:48.855975  162841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:48.856014  162841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:48.872998  162841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:48.973778  162841 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:48.980159  162841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:48.992352  162841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:49.050930  162841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:49.040148249 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:49.051366  162841 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:49.051394  162841 api_server.go:166] Checking apiserver status ...
	I1008 15:16:49.051427  162841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:49.062257  162841 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:49.062278  162841 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:49.062288  162841 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:49.067540   98900 retry.go:31] will retry after 4.85568049s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (297.715812ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:16:53.971949  162984 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:16:53.972213  162984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:53.972222  162984 out.go:374] Setting ErrFile to fd 2...
	I1008 15:16:53.972226  162984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:16:53.972457  162984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:16:53.972636  162984 out.go:368] Setting JSON to false
	I1008 15:16:53.972681  162984 mustload.go:65] Loading cluster: ha-430216
	I1008 15:16:53.972747  162984 notify.go:220] Checking for updates...
	I1008 15:16:53.973086  162984 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:16:53.973107  162984 status.go:174] checking status of ha-430216 ...
	I1008 15:16:53.973621  162984 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:16:53.992578  162984 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:16:53.992605  162984 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:53.992881  162984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:16:54.010960  162984 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:16:54.011202  162984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:16:54.011272  162984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:16:54.029134  162984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:16:54.130033  162984 ssh_runner.go:195] Run: systemctl --version
	I1008 15:16:54.137430  162984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:16:54.150626  162984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:16:54.208484  162984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:16:54.198541538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:16:54.208993  162984 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:16:54.209023  162984 api_server.go:166] Checking apiserver status ...
	I1008 15:16:54.209087  162984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:16:54.220308  162984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:16:54.220346  162984 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:16:54.220361  162984 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:16:54.225703   98900 retry.go:31] will retry after 8.500653911s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (298.85242ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:17:02.777745  163147 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:02.777894  163147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:02.777905  163147 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:02.777911  163147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:02.778126  163147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:02.778339  163147 out.go:368] Setting JSON to false
	I1008 15:17:02.778374  163147 mustload.go:65] Loading cluster: ha-430216
	I1008 15:17:02.778460  163147 notify.go:220] Checking for updates...
	I1008 15:17:02.778768  163147 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:02.778786  163147 status.go:174] checking status of ha-430216 ...
	I1008 15:17:02.779239  163147 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:02.796662  163147 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:17:02.796689  163147 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:02.796972  163147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:02.816128  163147 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:02.816454  163147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:02.816514  163147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:02.834701  163147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:02.938058  163147 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:02.944631  163147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:17:02.957407  163147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:03.014818  163147 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:17:03.004326899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:17:03.015320  163147 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:03.015352  163147 api_server.go:166] Checking apiserver status ...
	I1008 15:17:03.015397  163147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:17:03.026301  163147 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:03.026330  163147 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:17:03.026346  163147 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:17:03.031193   98900 retry.go:31] will retry after 16.580066079s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (302.464129ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:17:19.659251  163343 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:19.659528  163343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:19.659538  163343 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:19.659542  163343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:19.659748  163343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:19.659950  163343 out.go:368] Setting JSON to false
	I1008 15:17:19.659980  163343 mustload.go:65] Loading cluster: ha-430216
	I1008 15:17:19.660144  163343 notify.go:220] Checking for updates...
	I1008 15:17:19.660307  163343 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:19.660322  163343 status.go:174] checking status of ha-430216 ...
	I1008 15:17:19.661706  163343 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:19.681615  163343 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:17:19.681643  163343 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:19.681963  163343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:19.699775  163343 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:19.700069  163343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:19.700119  163343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:19.717400  163343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:19.820082  163343 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:19.826407  163343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:17:19.839672  163343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:19.900166  163343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:17:19.889155385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:17:19.900639  163343 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:19.900672  163343 api_server.go:166] Checking apiserver status ...
	I1008 15:17:19.900707  163343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:17:19.911623  163343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:19.911657  163343 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:17:19.911683  163343 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 15:17:19.916683   98900 retry.go:31] will retry after 15.665020947s: exit status 6
E1008 15:17:26.902341   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 6 (300.003699ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:17:35.635272  163551 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:35.635538  163551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:35.635547  163551 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:35.635551  163551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:35.635775  163551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:35.636000  163551 out.go:368] Setting JSON to false
	I1008 15:17:35.636030  163551 mustload.go:65] Loading cluster: ha-430216
	I1008 15:17:35.636169  163551 notify.go:220] Checking for updates...
	I1008 15:17:35.636349  163551 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:35.636363  163551 status.go:174] checking status of ha-430216 ...
	I1008 15:17:35.636808  163551 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:35.657367  163551 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:17:35.657411  163551 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:35.657719  163551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:35.675561  163551 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:35.675902  163551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:35.675954  163551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:35.693977  163551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:35.795845  163551 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:35.802213  163551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:17:35.815087  163551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:35.874675  163551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:17:35.863252354 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1008 15:17:35.875358  163551 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:35.875396  163551 api_server.go:166] Checking apiserver status ...
	I1008 15:17:35.875481  163551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:17:35.886173  163551 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:35.886199  163551 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:17:35.886215  163551 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5" : exit status 6
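Note: the repeated "exit status 6" above traces back to the kubeconfig endpoint error in each attempt — the "ha-430216" context was never written to /home/jenkins/minikube-integration/21681-94984/kubeconfig, so the status helper reports Kubeconfig:Misconfigured even though the host and kubelet are running. A minimal sketch of how this condition could be checked and repaired by hand, assuming the profile name and KUBECONFIG path taken from the logs above (this is not part of the test itself):

	# list contexts known to the kubeconfig the test uses; "ha-430216" is expected to be missing
	KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig kubectl config get-contexts
	# rewrite the context for this profile, as the warning in the command output suggests
	out/minikube-linux-amd64 -p ha-430216 update-context
	# re-check; with a valid context the kubeconfig field should report Configured
	out/minikube-linux-amd64 -p ha-430216 status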
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
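The inspect output above also shows why the status helper can still SSH into the node while the kubeconfig is broken: the container publishes 22/tcp on 127.0.0.1:32783, which is exactly the mapping the earlier cli_runner/sshutil lines query. A small sketch of the same lookup from a shell, using the container name and key path shown in the logs (illustrative only):

	# print the host port docker published for the container's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-430216
	# connect roughly the way minikube's ssh_runner does, with the profile's generated key
	ssh -i /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa -p 32783 docker@127.0.0.1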
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (297.47935ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:17:36.191854  163675 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
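For reference, minikube's status command describes its exit code as a set of bit flags (1 for the host, 2 for the cluster, 4 for Kubernetes); assuming that convention, exit status 6 here means the host check passed while the cluster and Kubernetes checks failed, which matches the "Running / Misconfigured" output above. A short sketch of decoding it with the same binary and profile as the test:

	out/minikube-linux-amd64 -p ha-430216 status
	rc=$?
	# decode the bit flags; 6 = 2 (cluster) + 4 (kubernetes)
	(( rc & 1 )) && echo "host not OK"
	(( rc & 2 )) && echo "cluster not OK"
	(( rc & 4 )) && echo "kubernetes not OK"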
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                                                  │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                                                 │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:17:26 ha-430216 crio[778]: time="2025-10-08T15:17:26.442317581Z" level=info msg="createCtr: removing container 857c9f79dacd4f5ac1b317afa907093614e2524e6af8a5c4adf48ec7b65f7b23" id=d4555ac5-bfb5-48da-9797-642dc71e040a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:26 ha-430216 crio[778]: time="2025-10-08T15:17:26.442374839Z" level=info msg="createCtr: deleting container 857c9f79dacd4f5ac1b317afa907093614e2524e6af8a5c4adf48ec7b65f7b23 from storage" id=d4555ac5-bfb5-48da-9797-642dc71e040a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:26 ha-430216 crio[778]: time="2025-10-08T15:17:26.44461635Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=d4555ac5-bfb5-48da-9797-642dc71e040a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.413341961Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c7338350-2084-4050-a9cf-1da9c506fd42 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.413388738Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3419a37a-de25-451f-8cfd-0bdc611df42f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.414280406Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6ff9d948-8fc5-45e2-8112-48f6a69d45f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.414282438Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=502bdef1-3cac-409d-8213-2f4fd2969f6a name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.415200125Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.415201744Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.415394536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.415499029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.419680235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.420263772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.421126795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.421558485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.441814532Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.442605777Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443399283Z" level=info msg="createCtr: deleting container ID d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e from idIndex" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443451672Z" level=info msg="createCtr: removing container d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443493226Z" level=info msg="createCtr: deleting container d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e from storage" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443985832Z" level=info msg="createCtr: deleting container ID 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73 from idIndex" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.444020666Z" level=info msg="createCtr: removing container 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.444048224Z" level=info msg="createCtr: deleting container 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73 from storage" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.446884661Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.447219094Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
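	The repeated CRI-O error above ("Container creation error: cannot open sd-bus: No such file or directory") suggests the runtime's systemd cgroup manager cannot reach systemd over D-Bus, so no kube-* container is ever created; that is consistent with every kubeadm control-plane probe in this log failing with "connection refused" and with the empty container status below. A minimal, hedged sketch of re-running those probes by hand from inside the node (assumes `minikube ssh -p ha-430216` works and that curl is present in the node image; the ports, paths, and the crictl command are copied from the log above, not verified here — even a 401/403 reply would mean the component is listening, whereas "connection refused" means nothing is):
		# inside the node: check the control-plane endpoints kubeadm polls
		curl -ksS https://127.0.0.1:10259/livez    || true   # kube-scheduler
		curl -ksS https://127.0.0.1:10257/healthz  || true   # kube-controller-manager
		curl -ksS https://192.168.49.2:8443/livez  || true   # kube-apiserver
		# confirm CRI-O never created any kube-* containers (command from the kubeadm hint above)
		sudo crictl ps -a | grep kube | grep -v pause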
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:17:36.773772    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:36.774351    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:36.775915    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:36.776386    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:36.777672    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:17:36 up  3:00,  0 user,  load average: 0.20, 0.12, 0.17
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:17:26 ha-430216 kubelet[1982]: E1008 15:17:26.671155    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:17:28 ha-430216 kubelet[1982]: E1008 15:17:28.059311    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:17:28 ha-430216 kubelet[1982]: I1008 15:17:28.240326    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:17:28 ha-430216 kubelet[1982]: E1008 15:17:28.240766    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.412924    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.413020    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.437963    1982 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447223    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:17:30 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > podSandboxID="41af8cf12376e9d30f8ae1968d47ed16c7dc1929f6b4bab8480c3eeb863d9025"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447338    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:17:30 ha-430216 kubelet[1982]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447381    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447418    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:17:30 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447518    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:17:30 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.448650    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: E1008 15:17:35.060153    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: I1008 15:17:35.242732    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: E1008 15:17:35.243222    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:17:36 ha-430216 kubelet[1982]: E1008 15:17:36.671729    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (322.230568ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:17:37.173523  163998 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.97s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-430216" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-430216" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:06:35.900413327Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce0da9acf5e713f529e75ee1bae754d1da02c23ae64cb8e7d475722e0b14179d",
	            "SandboxKey": "/var/run/docker/netns/ce0da9acf5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:fd:de:dd:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "93cbb313906c88a28161cbfcd96c96728c97c1ee52386973d55727cdea42aaed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 6 (308.893492ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:17:37.824183  164519 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-367186 image ls --format json --alsologtostderr                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls --format table --alsologtostderr                                                     │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ image   │ functional-367186 image ls                                                                                      │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:02 UTC │ 08 Oct 25 15:02 UTC │
	│ delete  │ -p functional-367186                                                                                            │ functional-367186 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │ 08 Oct 25 15:06 UTC │
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                                                  │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                                                 │ ha-430216         │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:06:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:06:30.591545  151549 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:30.591866  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.591878  151549 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:30.591882  151549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:30.592106  151549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:06:30.592683  151549 out.go:368] Setting JSON to false
	I1008 15:06:30.593743  151549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10142,"bootTime":1759925849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:30.593882  151549 start.go:141] virtualization: kvm guest
	I1008 15:06:30.596101  151549 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:30.597701  151549 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:30.597728  151549 notify.go:220] Checking for updates...
	I1008 15:06:30.600615  151549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:30.602201  151549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:06:30.603754  151549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:06:30.605224  151549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:30.606674  151549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:30.608221  151549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:30.634094  151549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:06:30.634249  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.693572  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.683647681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.693688  151549 docker.go:318] overlay module found
	I1008 15:06:30.696657  151549 out.go:179] * Using the docker driver based on user configuration
	I1008 15:06:30.698121  151549 start.go:305] selected driver: docker
	I1008 15:06:30.698143  151549 start.go:925] validating driver "docker" against <nil>
	I1008 15:06:30.698156  151549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:30.698905  151549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:06:30.758874  151549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:06:30.748800335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:06:30.759035  151549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:06:30.759254  151549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:06:30.761080  151549 out.go:179] * Using Docker driver with root privileges
	I1008 15:06:30.762385  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:30.762509  151549 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 15:06:30.762529  151549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:06:30.762624  151549 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1008 15:06:30.763980  151549 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:06:30.765219  151549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:06:30.766597  151549 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:06:30.767802  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:30.767855  151549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:06:30.767864  151549 cache.go:58] Caching tarball of preloaded images
	I1008 15:06:30.767893  151549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:06:30.767960  151549 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:06:30.767971  151549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:06:30.768286  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:30.768309  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json: {Name:mk017a0f7c93a30754e5e0bdbf8e78b988534a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:30.789104  151549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:06:30.789124  151549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:06:30.789142  151549 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:06:30.789172  151549 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:06:30.789293  151549 start.go:364] duration metric: took 99.523µs to acquireMachinesLock for "ha-430216"
	I1008 15:06:30.789321  151549 start.go:93] Provisioning new machine with config: &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:06:30.789398  151549 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:06:30.791704  151549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 15:06:30.791931  151549 start.go:159] libmachine.API.Create for "ha-430216" (driver="docker")
	I1008 15:06:30.791961  151549 client.go:168] LocalClient.Create starting
	I1008 15:06:30.792032  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:06:30.792069  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792082  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792130  151549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:06:30.792149  151549 main.go:141] libmachine: Decoding PEM data...
	I1008 15:06:30.792160  151549 main.go:141] libmachine: Parsing certificate...
	I1008 15:06:30.792529  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:06:30.809539  151549 cli_runner.go:211] docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:06:30.809634  151549 network_create.go:284] running [docker network inspect ha-430216] to gather additional debugging logs...
	I1008 15:06:30.809659  151549 cli_runner.go:164] Run: docker network inspect ha-430216
	W1008 15:06:30.826796  151549 cli_runner.go:211] docker network inspect ha-430216 returned with exit code 1
	I1008 15:06:30.826834  151549 network_create.go:287] error running [docker network inspect ha-430216]: docker network inspect ha-430216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-430216 not found
	I1008 15:06:30.826869  151549 network_create.go:289] output of [docker network inspect ha-430216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-430216 not found
	
	** /stderr **
	I1008 15:06:30.826981  151549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:30.844261  151549 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e34700}
	I1008 15:06:30.844304  151549 network_create.go:124] attempt to create docker network ha-430216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 15:06:30.844357  151549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-430216 ha-430216
	I1008 15:06:30.901968  151549 network_create.go:108] docker network ha-430216 192.168.49.0/24 created
	I1008 15:06:30.902008  151549 kic.go:121] calculated static IP "192.168.49.2" for the "ha-430216" container
	I1008 15:06:30.902084  151549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:06:30.919267  151549 cli_runner.go:164] Run: docker volume create ha-430216 --label name.minikube.sigs.k8s.io=ha-430216 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:06:30.940204  151549 oci.go:103] Successfully created a docker volume ha-430216
	I1008 15:06:30.940289  151549 cli_runner.go:164] Run: docker run --rm --name ha-430216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --entrypoint /usr/bin/test -v ha-430216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:06:31.343570  151549 oci.go:107] Successfully prepared a docker volume ha-430216
	I1008 15:06:31.343623  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:31.343650  151549 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:06:31.343713  151549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:06:35.789338  151549 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-430216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445521532s)
	I1008 15:06:35.789384  151549 kic.go:203] duration metric: took 4.44573048s to extract preloaded images to volume ...
	W1008 15:06:35.789560  151549 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:06:35.789599  151549 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:06:35.789651  151549 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:06:35.847581  151549 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-430216 --name ha-430216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-430216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-430216 --network ha-430216 --ip 192.168.49.2 --volume ha-430216:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:06:36.126627  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Running}}
	I1008 15:06:36.146085  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.164898  151549 cli_runner.go:164] Run: docker exec ha-430216 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:06:36.211585  151549 oci.go:144] the created container "ha-430216" has a running status.
	I1008 15:06:36.211626  151549 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa...
	I1008 15:06:36.340276  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 15:06:36.340328  151549 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:06:36.367270  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.389184  151549 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:06:36.389211  151549 kic_runner.go:114] Args: [docker exec --privileged ha-430216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:06:36.437942  151549 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:06:36.463571  151549 machine.go:93] provisionDockerMachine start ...
	I1008 15:06:36.463808  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.484043  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.484479  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.484504  151549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:06:36.638964  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.639005  151549 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:06:36.639079  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.658083  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.658337  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.658352  151549 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:06:36.817795  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:06:36.817909  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:36.837168  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:36.837414  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:36.837434  151549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:06:36.986695  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:06:36.986744  151549 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:06:36.986768  151549 ubuntu.go:190] setting up certificates
	I1008 15:06:36.986786  151549 provision.go:84] configureAuth start
	I1008 15:06:36.986886  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.003828  151549 provision.go:143] copyHostCerts
	I1008 15:06:37.003886  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.003917  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:06:37.003927  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:06:37.004001  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:06:37.004086  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004107  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:06:37.004114  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:06:37.004142  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:06:37.004194  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004210  151549 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:06:37.004216  151549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:06:37.004239  151549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:06:37.004292  151549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:06:37.105051  151549 provision.go:177] copyRemoteCerts
	I1008 15:06:37.105122  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:06:37.105158  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.122653  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.227405  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:06:37.227492  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:06:37.248679  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:06:37.248746  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1008 15:06:37.268142  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:06:37.268208  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:06:37.287374  151549 provision.go:87] duration metric: took 300.568522ms to configureAuth
	I1008 15:06:37.287405  151549 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:06:37.287639  151549 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:37.287768  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.305894  151549 main.go:141] libmachine: Using SSH client type: native
	I1008 15:06:37.306122  151549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1008 15:06:37.306137  151549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:06:37.569130  151549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:06:37.569160  151549 machine.go:96] duration metric: took 1.105487248s to provisionDockerMachine
	I1008 15:06:37.569171  151549 client.go:171] duration metric: took 6.777202227s to LocalClient.Create
	I1008 15:06:37.569194  151549 start.go:167] duration metric: took 6.777263839s to libmachine.API.Create "ha-430216"
	I1008 15:06:37.569205  151549 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:06:37.569219  151549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:06:37.569295  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:06:37.569344  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.587256  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.694153  151549 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:06:37.698216  151549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:06:37.698244  151549 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:06:37.698265  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:06:37.698322  151549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:06:37.698398  151549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:06:37.698408  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:06:37.698520  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:06:37.706624  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:37.727884  151549 start.go:296] duration metric: took 158.660667ms for postStartSetup
	I1008 15:06:37.728301  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.746846  151549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:06:37.747127  151549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:06:37.747171  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.764796  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.866990  151549 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:06:37.871820  151549 start.go:128] duration metric: took 7.082340068s to createHost
	I1008 15:06:37.871870  151549 start.go:83] releasing machines lock for "ha-430216", held for 7.082557184s
	I1008 15:06:37.871960  151549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:06:37.889228  151549 ssh_runner.go:195] Run: cat /version.json
	I1008 15:06:37.889281  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.889292  151549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:06:37.889346  151549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:06:37.907568  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:37.907935  151549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:06:38.006809  151549 ssh_runner.go:195] Run: systemctl --version
	I1008 15:06:38.063721  151549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:06:38.099315  151549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:06:38.104266  151549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:06:38.104348  151549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:06:38.131632  151549 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:06:38.131658  151549 start.go:495] detecting cgroup driver to use...
	I1008 15:06:38.131695  151549 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:06:38.131748  151549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:06:38.147899  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:06:38.160738  151549 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:06:38.160804  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:06:38.177797  151549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:06:38.196422  151549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:06:38.278643  151549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:06:38.366682  151549 docker.go:234] disabling docker service ...
	I1008 15:06:38.366759  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:06:38.385645  151549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:06:38.398370  151549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:06:38.483659  151549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:06:38.570317  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:06:38.583658  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:06:38.599061  151549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:06:38.599114  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.610376  151549 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:06:38.610456  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.620413  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.630354  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.640305  151549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:06:38.649181  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.658867  151549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.673573  151549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:06:38.683008  151549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:06:38.690758  151549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:06:38.698402  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:38.777527  151549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:06:38.884651  151549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:06:38.884733  151549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:06:38.888999  151549 start.go:563] Will wait 60s for crictl version
	I1008 15:06:38.889060  151549 ssh_runner.go:195] Run: which crictl
	I1008 15:06:38.892943  151549 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:06:38.917499  151549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:06:38.917589  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.947623  151549 ssh_runner.go:195] Run: crio --version
	I1008 15:06:38.978318  151549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:06:38.979602  151549 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:06:38.997833  151549 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:06:39.002645  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.014103  151549 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:06:39.014226  151549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:06:39.014276  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.046868  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.046892  151549 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:06:39.046937  151549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:06:39.073537  151549 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:06:39.073560  151549 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:06:39.073567  151549 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:06:39.073686  151549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:06:39.073749  151549 ssh_runner.go:195] Run: crio config
	I1008 15:06:39.123524  151549 cni.go:84] Creating CNI manager for ""
	I1008 15:06:39.123555  151549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:06:39.123577  151549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:06:39.123600  151549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:06:39.123734  151549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:06:39.123759  151549 kube-vip.go:115] generating kube-vip config ...
	I1008 15:06:39.123806  151549 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1008 15:06:39.137169  151549 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:06:39.137288  151549 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1008 15:06:39.137364  151549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:06:39.146505  151549 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:06:39.146570  151549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 15:06:39.154877  151549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:06:39.168344  151549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:06:39.185341  151549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:06:39.199704  151549 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1008 15:06:39.215311  151549 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1008 15:06:39.219372  151549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:06:39.230255  151549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:06:39.312200  151549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:06:39.338200  151549 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:06:39.338224  151549 certs.go:195] generating shared ca certs ...
	I1008 15:06:39.338248  151549 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.338409  151549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:06:39.338471  151549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:06:39.338487  151549 certs.go:257] generating profile certs ...
	I1008 15:06:39.338540  151549 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:06:39.338560  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt with IP's: []
	I1008 15:06:39.682453  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt ...
	I1008 15:06:39.682487  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt: {Name:mkbd00e0505a8395b90a1a08fd8eeeca25117b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682716  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key ...
	I1008 15:06:39.682736  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key: {Name:mk65605370c2f8a34fec92c7cdb030ff69077d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:39.682909  151549 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5
	I1008 15:06:39.682929  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1008 15:06:40.302940  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 ...
	I1008 15:06:40.302974  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5: {Name:mk403bbde28c9e144692befa9e2ab9b1562d32e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303180  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 ...
	I1008 15:06:40.303207  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5: {Name:mk08370e1c492cbe8d81805c93f8d6deb5de0ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.303317  151549 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:06:40.303458  151549 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.7c7940d5 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:06:40.303551  151549 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:06:40.303574  151549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt with IP's: []
	I1008 15:06:40.435299  151549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt ...
	I1008 15:06:40.435334  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt: {Name:mk6669e0b6e70ebfa909f08ac04b89373b82b45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435560  151549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key ...
	I1008 15:06:40.435586  151549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key: {Name:mk957cd6fe9b83c08273ab3250c60b6cf763e68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:06:40.435702  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:06:40.435724  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:06:40.435755  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:06:40.435776  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:06:40.435793  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:06:40.435814  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:06:40.435835  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:06:40.435849  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:06:40.435929  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:06:40.435978  151549 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:06:40.435993  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:06:40.436023  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:06:40.436052  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:06:40.436085  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:06:40.436140  151549 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:06:40.436176  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.436197  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.436218  151549 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.436805  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:06:40.456295  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:06:40.475463  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:06:40.494769  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:06:40.513727  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 15:06:40.533963  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:06:40.553389  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:06:40.572147  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:06:40.591127  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:06:40.612169  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:06:40.632029  151549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:06:40.651597  151549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:06:40.665619  151549 ssh_runner.go:195] Run: openssl version
	I1008 15:06:40.672331  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:06:40.682009  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686308  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.686373  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:06:40.720828  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:06:40.730392  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:06:40.740030  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744394  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.744491  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:06:40.778901  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:06:40.788813  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:06:40.798760  151549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802738  151549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.802792  151549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:06:40.838765  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:06:40.848037  151549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:06:40.852156  151549 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:06:40.852218  151549 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:06:40.852295  151549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:06:40.852350  151549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:06:40.880372  151549 cri.go:89] found id: ""
	I1008 15:06:40.880458  151549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:06:40.889024  151549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:06:40.897620  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:06:40.897673  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:06:40.906007  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:06:40.906027  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:06:40.906077  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:06:40.914419  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:06:40.914491  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:06:40.923059  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:06:40.931588  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:06:40.931653  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:06:40.939970  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.948243  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:06:40.948317  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:06:40.956519  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:06:40.965497  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:06:40.965581  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:06:40.973565  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:06:41.033604  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:06:41.092333  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:10:45.223937  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:10:45.224127  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:10:45.226678  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:45.226792  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:45.226993  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:45.227093  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:45.227157  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:45.227262  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:45.227364  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:45.227488  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:45.227537  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:45.227576  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:45.227615  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:45.227653  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:45.227692  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:45.227771  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:45.227869  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:45.227950  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:45.228036  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:45.230485  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:45.230592  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:45.230649  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:45.230743  151549 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:10:45.230800  151549 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:10:45.230856  151549 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:10:45.230909  151549 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:10:45.230966  151549 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:10:45.231059  151549 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231117  151549 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:10:45.231204  151549 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 15:10:45.231256  151549 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:10:45.231317  151549 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:10:45.231356  151549 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:10:45.231400  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:45.231460  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:45.231513  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:45.231557  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:45.231622  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:45.231671  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:45.231736  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:45.231789  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:45.233428  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:45.233538  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:45.233604  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:45.233695  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:45.233816  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:45.233901  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:45.233995  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:45.234096  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:45.234143  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:45.234245  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:45.234331  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:45.234382  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.985092ms
	I1008 15:10:45.234489  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:45.234598  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:45.234716  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:45.234818  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:10:45.234920  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	I1008 15:10:45.235019  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	I1008 15:10:45.235127  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	I1008 15:10:45.235136  151549 kubeadm.go:318] 
	I1008 15:10:45.235251  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:10:45.235356  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:10:45.235464  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:10:45.235565  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:10:45.235634  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:10:45.235696  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:10:45.235722  151549 kubeadm.go:318] 
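	For reference: under the docker driver the node is itself a docker container, so kubeadm's suggested checks can also be run from the host. A minimal sketch, assuming the node container carries the profile name ha-430216 that appears in the certificate names above (not part of the captured log):

	  docker exec ha-430216 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  docker exec ha-430216 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID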
	W1008 15:10:45.235884  151549 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-430216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.985092ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001084505s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001231945s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:10:45.235969  151549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:10:47.973633  151549 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.73763601s)
	I1008 15:10:47.973718  151549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:10:47.986829  151549 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:10:47.986902  151549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:10:47.995228  151549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:10:47.995247  151549 kubeadm.go:157] found existing configuration files:
	
	I1008 15:10:47.995306  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:10:48.003637  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:10:48.003700  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:10:48.011691  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:10:48.019741  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:10:48.019805  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:10:48.027893  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.035967  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:10:48.036030  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:10:48.043957  151549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:10:48.052163  151549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:10:48.052232  151549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
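	The checks above amount to: after `kubeadm reset`, keep each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the retry regenerates it. A compact sketch of the same cleanup (the loop form is mine; the paths and endpoint are from the log):

	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done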
	I1008 15:10:48.060553  151549 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:10:48.099558  151549 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:10:48.099671  151549 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:10:48.119823  151549 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:10:48.119884  151549 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:10:48.119963  151549 kubeadm.go:318] OS: Linux
	I1008 15:10:48.120043  151549 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:10:48.120131  151549 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:10:48.120202  151549 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:10:48.120263  151549 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:10:48.120334  151549 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:10:48.120419  151549 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:10:48.120495  151549 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:10:48.120565  151549 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:10:48.181077  151549 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:10:48.181239  151549 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:10:48.181382  151549 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:10:48.187698  151549 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:10:48.191717  151549 out.go:252]   - Generating certificates and keys ...
	I1008 15:10:48.191811  151549 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:10:48.191888  151549 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:10:48.191976  151549 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:10:48.192027  151549 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:10:48.192101  151549 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:10:48.192169  151549 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:10:48.192220  151549 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:10:48.192277  151549 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:10:48.192355  151549 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:10:48.192415  151549 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:10:48.192468  151549 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:10:48.192534  151549 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:10:48.476904  151549 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:10:48.673462  151549 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:10:48.838082  151549 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:10:49.057522  151549 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:10:49.597485  151549 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:10:49.597812  151549 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:10:49.599989  151549 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:10:49.602248  151549 out.go:252]   - Booting up control plane ...
	I1008 15:10:49.602341  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:10:49.602479  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:10:49.602554  151549 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:10:49.616512  151549 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:10:49.616680  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:10:49.623787  151549 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:10:49.624020  151549 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:10:49.624077  151549 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:10:49.732619  151549 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:10:49.732772  151549 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:10:50.733627  151549 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124628s
	I1008 15:10:50.736594  151549 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:10:50.736686  151549 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 15:10:50.736810  151549 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:10:50.736932  151549 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:14:50.737002  151549 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	I1008 15:14:50.737182  151549 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	I1008 15:14:50.737314  151549 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	I1008 15:14:50.737328  151549 kubeadm.go:318] 
	I1008 15:14:50.737430  151549 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:14:50.737571  151549 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:14:50.737681  151549 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:14:50.737825  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:14:50.737927  151549 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:14:50.738046  151549 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:14:50.738071  151549 kubeadm.go:318] 
	I1008 15:14:50.741378  151549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:14:50.741532  151549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:14:50.742055  151549 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:14:50.742124  151549 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:14:50.742220  151549 kubeadm.go:402] duration metric: took 8m9.890006529s to StartCluster
	I1008 15:14:50.742291  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:14:50.742350  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:14:50.770476  151549 cri.go:89] found id: ""
	I1008 15:14:50.770519  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.770528  151549 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:14:50.770535  151549 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:14:50.770583  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:14:50.798412  151549 cri.go:89] found id: ""
	I1008 15:14:50.798440  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.798464  151549 logs.go:284] No container was found matching "etcd"
	I1008 15:14:50.798473  151549 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:14:50.798529  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:14:50.825721  151549 cri.go:89] found id: ""
	I1008 15:14:50.825755  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.825767  151549 logs.go:284] No container was found matching "coredns"
	I1008 15:14:50.825775  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:14:50.825844  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:14:50.854545  151549 cri.go:89] found id: ""
	I1008 15:14:50.854580  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.854593  151549 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:14:50.854602  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:14:50.854667  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:14:50.882286  151549 cri.go:89] found id: ""
	I1008 15:14:50.882316  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.882328  151549 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:14:50.882336  151549 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:14:50.882391  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:14:50.909576  151549 cri.go:89] found id: ""
	I1008 15:14:50.909599  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.909607  151549 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:14:50.909613  151549 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:14:50.909662  151549 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:14:50.937595  151549 cri.go:89] found id: ""
	I1008 15:14:50.937619  151549 logs.go:282] 0 containers: []
	W1008 15:14:50.937631  151549 logs.go:284] No container was found matching "kindnet"
	I1008 15:14:50.937644  151549 logs.go:123] Gathering logs for kubelet ...
	I1008 15:14:50.937659  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:14:51.005015  151549 logs.go:123] Gathering logs for dmesg ...
	I1008 15:14:51.005062  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:14:51.020031  151549 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:14:51.020065  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:14:51.082050  151549 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:14:51.074340    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.075002    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.076713    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.077185    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:14:51.078796    2595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:14:51.082076  151549 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:14:51.082091  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:14:51.143656  151549 logs.go:123] Gathering logs for container status ...
	I1008 15:14:51.143701  151549 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:14:51.173767  151549 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:14:51.173874  151549 out.go:285] * 
	W1008 15:14:51.173981  151549 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.174003  151549 out.go:285] * 
	W1008 15:14:51.175779  151549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:14:51.179620  151549 out.go:203] 
	W1008 15:14:51.181038  151549 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001124628s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000044666s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000256465s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446691s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:14:51.181071  151549 out.go:285] * 
	I1008 15:14:51.183310  151549 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.419680235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.420263772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.421126795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.421558485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.441814532Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.442605777Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443399283Z" level=info msg="createCtr: deleting container ID d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e from idIndex" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443451672Z" level=info msg="createCtr: removing container d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443493226Z" level=info msg="createCtr: deleting container d23b14007487a0a7757fe844910676179ba1669054fc3cc50b9d7077ba67fd3e from storage" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.443985832Z" level=info msg="createCtr: deleting container ID 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73 from idIndex" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.444020666Z" level=info msg="createCtr: removing container 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.444048224Z" level=info msg="createCtr: deleting container 9f3c58e5f59c9930b93ddcf874632a7dbffe366d674282a712d1b95700032b73 from storage" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.446884661Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=ee221e24-115f-4b03-ad42-c74b96c4cb41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:30 ha-430216 crio[778]: time="2025-10-08T15:17:30.447219094Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=31b41712-63be-4bf2-a7da-16b79b9e7d76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.413216529Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f1b05286-534c-4f92-85f9-9bc8ce45e6b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.414220437Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5ac81649-e380-46cf-bd88-f25a8bf301c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.415202292Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.415456439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.418727609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.419288014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.432632562Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.434173057Z" level=info msg="createCtr: deleting container ID f4c38c652a9350b9d6ea00f5a125a8618683b043c45b995352445b81b12966e1 from idIndex" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.434224241Z" level=info msg="createCtr: removing container f4c38c652a9350b9d6ea00f5a125a8618683b043c45b995352445b81b12966e1" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.434272537Z" level=info msg="createCtr: deleting container f4c38c652a9350b9d6ea00f5a125a8618683b043c45b995352445b81b12966e1 from storage" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:17:37 ha-430216 crio[778]: time="2025-10-08T15:17:37.436589252Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=11fd4b37-1b3f-4834-9eef-f94212f4b802 name=/runtime.v1.RuntimeService/CreateContainer
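	The repeated "cannot open sd-bus: No such file or directory" failures above are the immediate reason no control-plane container ever starts: the OCI runtime is trying to place each container in a systemd cgroup scope over D-Bus and cannot reach a system bus inside the node. A minimal diagnostic sketch, assuming the standard CRI-O config locations and the dbus unit name on the Debian node image (not taken from this log):

	  minikube ssh -p ha-430216 -- sudo grep -Rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	  minikube ssh -p ha-430216 -- systemctl is-active dbus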
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:17:38.465972    4817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:38.466490    4817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:38.468119    4817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:38.468637    4817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:17:38.470149    4817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
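	Since the apiserver on localhost:8443 never came up, the kubectl-based "describe nodes" step above cannot succeed; the node has to be inspected directly. A minimal sketch using the profile name from this log (commands are not part of the captured output):

	  minikube ssh -p ha-430216 -- sudo ls -l /etc/kubernetes/manifests
	  minikube ssh -p ha-430216 -- sudo crictl ps -a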
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:17:38 up  3:00,  0 user,  load average: 0.34, 0.15, 0.18
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > podSandboxID="ef2159d8908409dc5cc1b61806acc0ee2a98b6813d1fce3744421ac822ba444b"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.447518    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:17:30 ha-430216 kubelet[1982]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:30 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:17:30 ha-430216 kubelet[1982]: E1008 15:17:30.448650    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: E1008 15:17:35.060153    1982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: I1008 15:17:35.242732    1982 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:17:35 ha-430216 kubelet[1982]: E1008 15:17:35.243222    1982 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:17:36 ha-430216 kubelet[1982]: E1008 15:17:36.671729    1982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8cb1f8ca32d1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,LastTimestamp:2025-10-08 15:10:50.406122193 +0000 UTC m=+0.673152243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:17:37 ha-430216 kubelet[1982]: E1008 15:17:37.412653    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:17:37 ha-430216 kubelet[1982]: E1008 15:17:37.436882    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:17:37 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:37 ha-430216 kubelet[1982]:  > podSandboxID="53772938dd72b0704ce7f5196ea9e84ad454215649feb01984fd33ff782177e3"
	Oct 08 15:17:37 ha-430216 kubelet[1982]: E1008 15:17:37.436983    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:17:37 ha-430216 kubelet[1982]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:37 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:17:37 ha-430216 kubelet[1982]: E1008 15:17:37.437014    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:17:38 ha-430216 kubelet[1982]: E1008 15:17:38.412510    1982 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:17:38 ha-430216 kubelet[1982]: E1008 15:17:38.441883    1982 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:17:38 ha-430216 kubelet[1982]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:38 ha-430216 kubelet[1982]:  > podSandboxID="4ba4ad1be062548d50f1a9af1501a0f07194e622a44b28a66545c5058d20f537"
	Oct 08 15:17:38 ha-430216 kubelet[1982]: E1008 15:17:38.441984    1982 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:17:38 ha-430216 kubelet[1982]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:17:38 ha-430216 kubelet[1982]:  > logger="UnhandledError"
	Oct 08 15:17:38 ha-430216 kubelet[1982]: E1008 15:17:38.442013    1982 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	

                                                
                                                
-- /stdout --
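The dump above ends with an empty container list and an apiserver refusing connections on localhost:8443, while crio repeatedly fails container creation with "cannot open sd-bus: No such file or directory". A minimal sketch for re-collecting the same diagnostic sections by hand, assuming the ha-430216 profile from this run and a standard minikube/docker setup:

	# hypothetical manual re-collection of the sections shown above (not part of the test run)
	minikube logs -p ha-430216 --file=ha-430216.log
	minikube ssh -p ha-430216 -- sudo crictl ps -a
	minikube ssh -p ha-430216 -- sudo journalctl -u kubelet --no-pager -n 50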
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 6 (306.138493ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:17:38.856334  164955 status.go:458] kubeconfig endpoint: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.68s)
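The status check fails because the "ha-430216" entry is missing from the kubeconfig, and the stdout above already names the fix. A hedged sketch of that recovery step outside the test harness, assuming the same profile:

	# not part of the test run; refresh the kubeconfig entry, then re-check the apiserver state
	minikube update-context -p ha-430216
	minikube status -p ha-430216 --format='{{.APIServer}}'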

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-430216 stop --alsologtostderr -v 5: (1.224782582s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 start --wait true --alsologtostderr -v 5
E1008 15:22:26.903189   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 start --wait true --alsologtostderr -v 5: exit status 80 (6m8.020704511s)

                                                
                                                
-- stdout --
	* [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
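The stderr trace that follows shows minikube restarting the stopped container before re-provisioning it. The same container-level checks can be reproduced directly with docker (a sketch reusing the inspect/start invocations from the trace below):

	# not part of the test run; mirrors the cli_runner calls in the stderr trace
	docker container inspect ha-430216 --format={{.State.Status}}
	docker start ha-430216
	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216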
** stderr ** 
	I1008 15:17:40.199526  165314 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:40.199829  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.199841  165314 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:40.199845  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.200025  165314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:40.200506  165314 out.go:368] Setting JSON to false
	I1008 15:17:40.201472  165314 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10811,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:17:40.201578  165314 start.go:141] virtualization: kvm guest
	I1008 15:17:40.203913  165314 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:17:40.205535  165314 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:17:40.205583  165314 notify.go:220] Checking for updates...
	I1008 15:17:40.208565  165314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:17:40.210117  165314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:40.211622  165314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:17:40.213029  165314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:17:40.214476  165314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:17:40.216479  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:40.216629  165314 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:17:40.242539  165314 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:17:40.242667  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.304220  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.293786011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.304329  165314 docker.go:318] overlay module found
	I1008 15:17:40.306374  165314 out.go:179] * Using the docker driver based on existing profile
	I1008 15:17:40.307763  165314 start.go:305] selected driver: docker
	I1008 15:17:40.307785  165314 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:40.307880  165314 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:17:40.307983  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.364929  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.355521293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.365573  165314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:17:40.365619  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:40.365678  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:40.365730  165314 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:17:40.367770  165314 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:17:40.369034  165314 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:17:40.370366  165314 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:17:40.371596  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:40.371635  165314 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:17:40.371651  165314 cache.go:58] Caching tarball of preloaded images
	I1008 15:17:40.371716  165314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:17:40.371748  165314 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:17:40.371756  165314 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:17:40.371872  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.392684  165314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:17:40.392707  165314 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:17:40.392735  165314 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:17:40.392762  165314 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:17:40.392820  165314 start.go:364] duration metric: took 40.317µs to acquireMachinesLock for "ha-430216"
	I1008 15:17:40.392840  165314 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:17:40.392844  165314 fix.go:54] fixHost starting: 
	I1008 15:17:40.393093  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.410344  165314 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:17:40.410395  165314 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:17:40.412417  165314 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:17:40.412507  165314 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:17:40.657462  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.676019  165314 kic.go:430] container "ha-430216" state is running.
	I1008 15:17:40.676351  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:40.696423  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.696761  165314 machine.go:93] provisionDockerMachine start ...
	I1008 15:17:40.696862  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:40.715440  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:40.715761  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:40.715779  165314 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:17:40.716557  165314 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36636->127.0.0.1:32788: read: connection reset by peer
	I1008 15:17:43.866807  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:43.866844  165314 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:17:43.866913  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:43.885755  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:43.886066  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:43.886085  165314 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:17:44.044811  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:44.044935  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.062657  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.062943  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.062962  165314 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:17:44.211403  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:17:44.211432  165314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:17:44.211462  165314 ubuntu.go:190] setting up certificates
	I1008 15:17:44.211481  165314 provision.go:84] configureAuth start
	I1008 15:17:44.211544  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:44.229072  165314 provision.go:143] copyHostCerts
	I1008 15:17:44.229109  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229137  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:17:44.229151  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229221  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:17:44.229317  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229336  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:17:44.229340  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229367  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:17:44.229432  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229473  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:17:44.229484  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229515  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:17:44.229587  165314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:17:44.499077  165314 provision.go:177] copyRemoteCerts
	I1008 15:17:44.499163  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:17:44.499212  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.516869  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:44.621363  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:17:44.621431  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:17:44.640000  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:17:44.640066  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:17:44.658584  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:17:44.658662  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 15:17:44.677694  165314 provision.go:87] duration metric: took 466.198036ms to configureAuth
	I1008 15:17:44.677721  165314 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:17:44.677906  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:44.678018  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.696317  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.696574  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.696594  165314 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:17:44.957182  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:17:44.957211  165314 machine.go:96] duration metric: took 4.260426846s to provisionDockerMachine
	I1008 15:17:44.957226  165314 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:17:44.957238  165314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:17:44.957296  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:17:44.957347  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.975366  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.079375  165314 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:17:45.083426  165314 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:17:45.083475  165314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:17:45.083489  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:17:45.083555  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:17:45.083654  165314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:17:45.083668  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:17:45.083797  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:17:45.092110  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:45.110634  165314 start.go:296] duration metric: took 153.392527ms for postStartSetup
	I1008 15:17:45.110712  165314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:45.110746  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.128609  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.229014  165314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:17:45.233764  165314 fix.go:56] duration metric: took 4.840910167s for fixHost
	I1008 15:17:45.233790  165314 start.go:83] releasing machines lock for "ha-430216", held for 4.840957644s
	I1008 15:17:45.233848  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:45.251189  165314 ssh_runner.go:195] Run: cat /version.json
	I1008 15:17:45.251208  165314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:17:45.251250  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.251265  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.269790  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.270642  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.424404  165314 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:45.431092  165314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:17:45.467246  165314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:17:45.472156  165314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:17:45.472216  165314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:17:45.480408  165314 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:17:45.480432  165314 start.go:495] detecting cgroup driver to use...
	I1008 15:17:45.480483  165314 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:17:45.480532  165314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:17:45.494905  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:17:45.507311  165314 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:17:45.507372  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:17:45.522294  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:17:45.535383  165314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:17:45.613394  165314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:17:45.698519  165314 docker.go:234] disabling docker service ...
	I1008 15:17:45.698592  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:17:45.712972  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:17:45.725410  165314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:17:45.808999  165314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:17:45.890393  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:17:45.903437  165314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:17:45.918341  165314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:17:45.918398  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.928311  165314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:17:45.928386  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.938723  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.948562  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.958637  165314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:17:45.967780  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.977284  165314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.986240  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.995533  165314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:17:46.003222  165314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:17:46.011206  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.088962  165314 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:17:46.194350  165314 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:17:46.194427  165314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:17:46.198496  165314 start.go:563] Will wait 60s for crictl version
	I1008 15:17:46.198558  165314 ssh_runner.go:195] Run: which crictl
	I1008 15:17:46.202386  165314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:17:46.228548  165314 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:17:46.228621  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.256833  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.288593  165314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:17:46.289934  165314 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:17:46.307676  165314 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:17:46.312234  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.324342  165314 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:17:46.324511  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:46.324585  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.355836  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.355859  165314 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:17:46.355919  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.382577  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.382601  165314 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:17:46.382609  165314 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:17:46.382723  165314 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:17:46.382802  165314 ssh_runner.go:195] Run: crio config
	I1008 15:17:46.428099  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:46.428124  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:46.428145  165314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:17:46.428173  165314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:17:46.428324  165314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:17:46.428406  165314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:17:46.436958  165314 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:17:46.437025  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:17:46.445838  165314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:17:46.458878  165314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:17:46.472075  165314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:17:46.485552  165314 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:17:46.489640  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.500389  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.578574  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:46.604149  165314 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:17:46.604181  165314 certs.go:195] generating shared ca certs ...
	I1008 15:17:46.604215  165314 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:46.604428  165314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:17:46.604510  165314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:17:46.604529  165314 certs.go:257] generating profile certs ...
	I1008 15:17:46.604662  165314 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:17:46.604697  165314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:17:46.604728  165314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 15:17:47.358821  165314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 ...
	I1008 15:17:47.358862  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92: {Name:mk5db33d068b68a4018c945a3cf387814181d041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359079  165314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 ...
	I1008 15:17:47.359099  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92: {Name:mk225894b1a1cad5b94eea81035f94b5877a9e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359220  165314 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:17:47.359416  165314 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:17:47.359616  165314 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:17:47.359637  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:17:47.359656  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:17:47.359681  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:17:47.359700  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:17:47.359717  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:17:47.359737  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:17:47.359753  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:17:47.359771  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:17:47.359839  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:17:47.359889  165314 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:17:47.359903  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:17:47.359938  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:17:47.359970  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:17:47.360003  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:17:47.360060  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:47.360098  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.360118  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.360137  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.360687  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:17:47.379230  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:17:47.397207  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:17:47.416101  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:17:47.434846  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:17:47.452646  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:17:47.471016  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:17:47.488663  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:17:47.506734  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:17:47.524704  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:17:47.542944  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:17:47.560522  165314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:17:47.573301  165314 ssh_runner.go:195] Run: openssl version
	I1008 15:17:47.579415  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:17:47.588940  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592844  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592911  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.627509  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:17:47.636174  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:17:47.645428  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649598  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649651  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.685089  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:17:47.693825  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:17:47.704633  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.709997  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.710062  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.752413  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
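The ln -fs steps above implement OpenSSL's hashed-directory lookup: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for the minikube CA here), and a symlink named <hash>.0 in /etc/ssl/certs lets any OpenSSL-based client resolve the certificate by subject. A rough Go equivalent that shells out for the hash (hypothetical helper; paths taken from the log):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace a stale link if one exists
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }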
	I1008 15:17:47.761256  165314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:17:47.765490  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:17:47.800513  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:17:47.834950  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:17:47.869178  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:17:47.904985  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:17:47.940157  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
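Each `-checkend 86400` run asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The same check expressed in Go (a sketch; the path is one of the files tested above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // equivalent of `openssl x509 -checkend 86400`
        expiresSoon := time.Now().Add(24 * time.Hour).After(cert.NotAfter)
        fmt.Println("expires within 24h:", expiresSoon)
    }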
	I1008 15:17:47.975301  165314 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:47.975398  165314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:17:47.975497  165314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:17:48.003286  165314 cri.go:89] found id: ""
	I1008 15:17:48.003362  165314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:17:48.011838  165314 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:17:48.011861  165314 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:17:48.011915  165314 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:17:48.019689  165314 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:48.020188  165314 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.020357  165314 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:17:48.020889  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.021422  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
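The client config dumped above is the client-go rest.Config minikube uses to reach the API server with the profile's client certificate. An equivalent clientset can be built like this (sketch; file locations and endpoint copied from the log):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = clientset // used for node and addon checks later in this log
    }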
	I1008 15:17:48.021950  165314 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:17:48.021991  165314 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:17:48.022003  165314 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:17:48.022008  165314 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:17:48.022011  165314 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:17:48.022010  165314 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:17:48.022367  165314 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:17:48.030350  165314 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:17:48.030384  165314 kubeadm.go:601] duration metric: took 18.515806ms to restartPrimaryControlPlane
	I1008 15:17:48.030391  165314 kubeadm.go:402] duration metric: took 55.10386ms to StartCluster
	I1008 15:17:48.030407  165314 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.030479  165314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.031062  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.031320  165314 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:17:48.031417  165314 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:17:48.031543  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:48.031550  165314 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:17:48.031579  165314 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:17:48.031542  165314 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:17:48.031697  165314 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:17:48.031741  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.031859  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.032220  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.035473  165314 out.go:179] * Verifying Kubernetes components...
	I1008 15:17:48.036689  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:48.051620  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.051985  165314 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:17:48.052033  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.052536  165314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:17:48.052546  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.053963  165314 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.053984  165314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:17:48.054040  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.079192  165314 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:17:48.079217  165314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:17:48.079284  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.080326  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.101694  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.146073  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:48.160230  165314 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
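node_ready.go now polls the node object until its Ready condition reports True or the 6-minute budget runs out; the repeated "connection refused" warnings below are individual polls failing while the API server restarts. A condensed sketch of such a wait loop with client-go (node name, endpoint, and timeout taken from the log; the TLS file settings would match the earlier rest.Config sketch):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // waitNodeReady polls the node's conditions until Ready=True or the timeout elapses.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // roughly the retry cadence visible in the warnings below
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
        cfg := &rest.Config{Host: "https://192.168.49.2:8443"} // TLS cert/key/CA files as in the earlier sketch
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(context.Background(), cs, "ha-430216", 6*time.Minute))
    }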
	I1008 15:17:48.193165  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.212223  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.250131  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.250182  165314 retry.go:31] will retry after 167.936984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:48.267393  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.267422  165314 retry.go:31] will retry after 358.217903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
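Every kubectl apply in this stretch fails the same way: client-side validation tries to download the server's OpenAPI schema, and because the API server on this node is still coming back up, localhost:8443 refuses the connection, so minikube retries each addon manifest with growing, jittered delays. A hypothetical retry helper in that spirit (function name, attempt budget, and starting delay are illustrative, not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(ctx context.Context, manifest string, attempts int) error {
        delay := 150 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.CommandContext(ctx, "kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
            // grow the wait and add jitter, roughly like the delays printed in the log
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry(context.Background(), "/etc/kubernetes/addons/storage-provisioner.yaml", 8); err != nil {
            fmt.Println(err)
        }
    }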
	I1008 15:17:48.418711  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.475364  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.475407  165314 retry.go:31] will retry after 446.950012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.626729  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.682981  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.683011  165314 retry.go:31] will retry after 531.317527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.923438  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.977935  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.977972  165314 retry.go:31] will retry after 650.888904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.214916  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:49.268803  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.268853  165314 retry.go:31] will retry after 736.958634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.629397  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:49.684331  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.684369  165314 retry.go:31] will retry after 676.827705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.006882  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.061009  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.061039  165314 retry.go:31] will retry after 545.238805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:50.161718  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:50.362321  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:50.417195  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.417229  165314 retry.go:31] will retry after 1.567260249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.606477  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.661410  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.661455  165314 retry.go:31] will retry after 1.443051142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:51.985236  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:52.040164  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.040193  165314 retry.go:31] will retry after 2.313802653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.105463  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:52.160492  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.160528  165314 retry.go:31] will retry after 1.660110088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:52.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:53.821608  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:53.876616  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:53.876662  165314 retry.go:31] will retry after 3.622883186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.354878  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:54.409389  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.409423  165314 retry.go:31] will retry after 4.24595112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:54.661241  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:17:57.161093  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:57.500619  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:57.554551  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:57.554585  165314 retry.go:31] will retry after 5.598675775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.656416  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:58.714339  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.714378  165314 retry.go:31] will retry after 5.615284906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:59.161298  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:01.161635  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:03.153501  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:03.207250  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:03.207281  165314 retry.go:31] will retry after 5.699792472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:03.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:04.330762  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:04.388974  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:04.389006  165314 retry.go:31] will retry after 4.649889332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:06.161419  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:08.661313  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:08.907702  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:08.963026  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:08.963063  165314 retry.go:31] will retry after 13.849348803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.039214  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:09.093068  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.093106  165314 retry.go:31] will retry after 13.971081611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:11.160802  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:13.161178  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:15.661334  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:18.161260  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:20.661258  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:22.813360  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:22.868201  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:22.868242  165314 retry.go:31] will retry after 18.044250351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.064572  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:23.119242  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.119274  165314 retry.go:31] will retry after 13.659632674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:23.160839  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:25.161596  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:27.661763  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:30.160996  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:32.161239  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:34.161289  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:36.161816  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:36.780076  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:36.835066  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:36.835125  165314 retry.go:31] will retry after 24.301634838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:38.661408  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:40.661719  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:40.913117  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:40.970652  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:40.970689  165314 retry.go:31] will retry after 29.623667492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:43.161261  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:45.661475  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:48.161429  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:50.661022  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:52.661560  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:55.161328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:57.661202  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:00.160922  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:01.137748  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:01.192234  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:01.192272  165314 retry.go:31] will retry after 46.151732803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:02.161479  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:04.661301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:07.161112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:09.161773  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:10.595151  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:10.649317  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:10.649348  165314 retry.go:31] will retry after 34.509482074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:11.661098  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:13.661164  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:16.160980  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:18.161117  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:20.661118  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:23.161018  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:25.661094  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:28.160962  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:30.660939  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:33.160970  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:35.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:38.161485  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:40.161694  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:42.660977  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:44.661727  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:45.159116  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:45.213330  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:45.213473  165314 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:19:47.161077  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:47.344346  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:47.398881  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:47.398996  165314 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:19:47.402318  165314 out.go:179] * Enabled addons: 
	I1008 15:19:47.403650  165314 addons.go:514] duration metric: took 1m59.372237661s for enable addons: enabled=[]
	W1008 15:19:49.161145  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:51.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:53.661015  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:56.161063  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:58.161490  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:00.161646  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:02.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:05.160822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:07.160868  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:09.660834  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:11.660990  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:14.160903  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:16.161003  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:18.161359  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:20.661641  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:22.661701  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:24.661824  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:27.160924  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:29.660942  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:31.661112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:34.160994  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:36.161213  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:38.661173  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:41.160956  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:43.660831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:45.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:48.161052  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:50.661054  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:53.160840  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:55.660921  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:58.160916  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:00.161810  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:02.660851  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:04.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:07.160828  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:09.161076  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:11.661048  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:14.160838  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:16.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:18.161296  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:20.661517  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:23.161552  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:25.161729  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:27.660935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:29.661209  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:32.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:34.660964  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:36.661288  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:38.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:41.160885  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:43.660822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:45.661636  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:47.661793  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:50.160858  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:52.160906  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:54.660820  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:56.660941  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:59.160837  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:01.160968  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:03.660914  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:05.661127  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:08.161235  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:10.661328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:13.160998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:15.661224  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:18.161405  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:20.661507  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:22.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:25.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:27.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:29.161301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:31.161696  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:33.161814  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:35.661109  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:38.161505  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:40.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:43.160905  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:45.161935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:47.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:49.661627  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:52.160997  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:54.660969  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:56.661103  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:58.661532  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:01.160960  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:03.161002  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:05.161043  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:07.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:09.161741  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:11.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:13.661045  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:15.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:17.661374  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:20.160831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:22.161040  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:24.660973  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:26.661132  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:28.661354  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:30.661576  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:32.661875  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:35.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:37.660961  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:39.661150  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:42.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:44.660882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:46.661243  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:23:48.160817  165314 node_ready.go:38] duration metric: took 6m0.000537691s for node "ha-430216" to be "Ready" ...
	I1008 15:23:48.163540  165314 out.go:203] 
	W1008 15:23:48.165619  165314 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:23:48.165638  165314 out.go:285] * 
	* 
	W1008 15:23:48.167470  165314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:23:48.169005  165314 out.go:203] 

                                                
                                                
** /stderr **
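Every failed apply in the stderr capture above traces back to the same root cause: nothing is accepting connections on the apiserver port, neither https://192.168.49.2:8443 from the test host nor https://localhost:8443 inside the node, so manifest validation cannot download the OpenAPI schema and the node never turns Ready before the 6m0s WaitNodeCondition deadline. A minimal triage sketch for a CRI-O node, assuming the ha-430216 profile is still running; these commands are illustrative, were not part of the captured run, and <container-id> is a placeholder:

# was a kube-apiserver container ever created, and in what state is it now?
out/minikube-linux-amd64 -p ha-430216 ssh -- sudo crictl ps -a --name kube-apiserver
# if an ID is listed, its logs usually say why port 8443 is not being served
out/minikube-linux-amd64 -p ha-430216 ssh -- sudo crictl logs <container-id>
# if no container exists at all, the kubelet journal shows why it was never started
out/minikube-linux-amd64 -p ha-430216 ssh -- sudo journalctl -u kubelet --no-pager -n 50
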
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-430216 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:17:40.438320544Z",
	            "FinishedAt": "2025-10-08T15:17:39.294925999Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a859197b65a38409f07e70f3c98c669d775d8929557c4da2e83a4d313514263a",
	            "SandboxKey": "/var/run/docker/netns/a859197b65a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:01:a9:ea:cf:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "e35ced1eecbb689c7a373ec6a83d63c8613f8d6b045af1de4211947cfde7a915",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
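The inspect output shows the node container itself is fine: it is Running, attached to the ha-430216 network at 192.168.49.2, and 8443/tcp is published to 127.0.0.1:32791 on the host (the host port is assigned per run). To separate "container unreachable" from "apiserver not listening", the health endpoint can be probed from both sides; a hedged sketch reusing the mapping shown above, not taken from the captured run:

# from the test host, through the published port
curl -k https://127.0.0.1:32791/healthz
# from inside the node, against the address the log lines were dialing
out/minikube-linux-amd64 -p ha-430216 ssh -- curl -k https://192.168.49.2:8443/healthz

Connection refused on both sides points at the apiserver process being down rather than a Docker port-mapping problem.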
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (302.252511ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ ha-430216 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:06 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                                                          │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                                       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                                                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                                                 │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                                      │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                                                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                                      │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:17:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:17:40.199526  165314 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:40.199829  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.199841  165314 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:40.199845  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.200025  165314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:40.200506  165314 out.go:368] Setting JSON to false
	I1008 15:17:40.201472  165314 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10811,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:17:40.201578  165314 start.go:141] virtualization: kvm guest
	I1008 15:17:40.203913  165314 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:17:40.205535  165314 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:17:40.205583  165314 notify.go:220] Checking for updates...
	I1008 15:17:40.208565  165314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:17:40.210117  165314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:40.211622  165314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:17:40.213029  165314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:17:40.214476  165314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:17:40.216479  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:40.216629  165314 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:17:40.242539  165314 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:17:40.242667  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.304220  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.293786011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.304329  165314 docker.go:318] overlay module found
	I1008 15:17:40.306374  165314 out.go:179] * Using the docker driver based on existing profile
	I1008 15:17:40.307763  165314 start.go:305] selected driver: docker
	I1008 15:17:40.307785  165314 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:40.307880  165314 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:17:40.307983  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.364929  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.355521293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.365573  165314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:17:40.365619  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:40.365678  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:40.365730  165314 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:17:40.367770  165314 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:17:40.369034  165314 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:17:40.370366  165314 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:17:40.371596  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:40.371635  165314 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:17:40.371651  165314 cache.go:58] Caching tarball of preloaded images
	I1008 15:17:40.371716  165314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:17:40.371748  165314 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:17:40.371756  165314 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:17:40.371872  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.392684  165314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:17:40.392707  165314 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:17:40.392735  165314 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:17:40.392762  165314 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:17:40.392820  165314 start.go:364] duration metric: took 40.317µs to acquireMachinesLock for "ha-430216"
	I1008 15:17:40.392840  165314 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:17:40.392844  165314 fix.go:54] fixHost starting: 
	I1008 15:17:40.393093  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.410344  165314 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:17:40.410395  165314 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:17:40.412417  165314 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:17:40.412507  165314 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:17:40.657462  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.676019  165314 kic.go:430] container "ha-430216" state is running.
	I1008 15:17:40.676351  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:40.696423  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.696761  165314 machine.go:93] provisionDockerMachine start ...
	I1008 15:17:40.696862  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:40.715440  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:40.715761  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:40.715779  165314 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:17:40.716557  165314 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36636->127.0.0.1:32788: read: connection reset by peer
	I1008 15:17:43.866807  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:43.866844  165314 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:17:43.866913  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:43.885755  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:43.886066  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:43.886085  165314 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:17:44.044811  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:44.044935  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.062657  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.062943  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.062962  165314 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:17:44.211403  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:17:44.211432  165314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:17:44.211462  165314 ubuntu.go:190] setting up certificates
	I1008 15:17:44.211481  165314 provision.go:84] configureAuth start
	I1008 15:17:44.211544  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:44.229072  165314 provision.go:143] copyHostCerts
	I1008 15:17:44.229109  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229137  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:17:44.229151  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229221  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:17:44.229317  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229336  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:17:44.229340  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229367  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:17:44.229432  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229473  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:17:44.229484  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229515  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:17:44.229587  165314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:17:44.499077  165314 provision.go:177] copyRemoteCerts
	I1008 15:17:44.499163  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:17:44.499212  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.516869  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:44.621363  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:17:44.621431  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:17:44.640000  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:17:44.640066  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:17:44.658584  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:17:44.658662  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 15:17:44.677694  165314 provision.go:87] duration metric: took 466.198036ms to configureAuth
	I1008 15:17:44.677721  165314 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:17:44.677906  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:44.678018  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.696317  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.696574  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.696594  165314 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:17:44.957182  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:17:44.957211  165314 machine.go:96] duration metric: took 4.260426846s to provisionDockerMachine
	I1008 15:17:44.957226  165314 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:17:44.957238  165314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:17:44.957296  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:17:44.957347  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.975366  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.079375  165314 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:17:45.083426  165314 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:17:45.083475  165314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:17:45.083489  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:17:45.083555  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:17:45.083654  165314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:17:45.083668  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:17:45.083797  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:17:45.092110  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:45.110634  165314 start.go:296] duration metric: took 153.392527ms for postStartSetup
	I1008 15:17:45.110712  165314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:45.110746  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.128609  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.229014  165314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:17:45.233764  165314 fix.go:56] duration metric: took 4.840910167s for fixHost
	I1008 15:17:45.233790  165314 start.go:83] releasing machines lock for "ha-430216", held for 4.840957644s
	I1008 15:17:45.233848  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:45.251189  165314 ssh_runner.go:195] Run: cat /version.json
	I1008 15:17:45.251208  165314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:17:45.251250  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.251265  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.269790  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.270642  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.424404  165314 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:45.431092  165314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:17:45.467246  165314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:17:45.472156  165314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:17:45.472216  165314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:17:45.480408  165314 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:17:45.480432  165314 start.go:495] detecting cgroup driver to use...
	I1008 15:17:45.480483  165314 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:17:45.480532  165314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:17:45.494905  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:17:45.507311  165314 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:17:45.507372  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:17:45.522294  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:17:45.535383  165314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:17:45.613394  165314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:17:45.698519  165314 docker.go:234] disabling docker service ...
	I1008 15:17:45.698592  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:17:45.712972  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:17:45.725410  165314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:17:45.808999  165314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:17:45.890393  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:17:45.903437  165314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:17:45.918341  165314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:17:45.918398  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.928311  165314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:17:45.928386  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.938723  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.948562  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.958637  165314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:17:45.967780  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.977284  165314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.986240  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.995533  165314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:17:46.003222  165314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:17:46.011206  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.088962  165314 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:17:46.194350  165314 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:17:46.194427  165314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:17:46.198496  165314 start.go:563] Will wait 60s for crictl version
	I1008 15:17:46.198558  165314 ssh_runner.go:195] Run: which crictl
	I1008 15:17:46.202386  165314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:17:46.228548  165314 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:17:46.228621  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.256833  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.288593  165314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:17:46.289934  165314 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:17:46.307676  165314 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:17:46.312234  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.324342  165314 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:17:46.324511  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:46.324585  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.355836  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.355859  165314 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:17:46.355919  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.382577  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.382601  165314 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:17:46.382609  165314 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:17:46.382723  165314 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:17:46.382802  165314 ssh_runner.go:195] Run: crio config
	I1008 15:17:46.428099  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:46.428124  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:46.428145  165314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:17:46.428173  165314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:17:46.428324  165314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:17:46.428406  165314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:17:46.436958  165314 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:17:46.437025  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:17:46.445838  165314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:17:46.458878  165314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:17:46.472075  165314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:17:46.485552  165314 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:17:46.489640  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.500389  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.578574  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:46.604149  165314 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:17:46.604181  165314 certs.go:195] generating shared ca certs ...
	I1008 15:17:46.604215  165314 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:46.604428  165314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:17:46.604510  165314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:17:46.604529  165314 certs.go:257] generating profile certs ...
	I1008 15:17:46.604662  165314 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:17:46.604697  165314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:17:46.604728  165314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 15:17:47.358821  165314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 ...
	I1008 15:17:47.358862  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92: {Name:mk5db33d068b68a4018c945a3cf387814181d041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359079  165314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 ...
	I1008 15:17:47.359099  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92: {Name:mk225894b1a1cad5b94eea81035f94b5877a9e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359220  165314 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:17:47.359416  165314 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:17:47.359616  165314 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:17:47.359637  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:17:47.359656  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:17:47.359681  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:17:47.359700  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:17:47.359717  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:17:47.359737  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:17:47.359753  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:17:47.359771  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:17:47.359839  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:17:47.359889  165314 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:17:47.359903  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:17:47.359938  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:17:47.359970  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:17:47.360003  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:17:47.360060  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:47.360098  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.360118  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.360137  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.360687  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:17:47.379230  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:17:47.397207  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:17:47.416101  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:17:47.434846  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:17:47.452646  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:17:47.471016  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:17:47.488663  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:17:47.506734  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:17:47.524704  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:17:47.542944  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:17:47.560522  165314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:17:47.573301  165314 ssh_runner.go:195] Run: openssl version
	I1008 15:17:47.579415  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:17:47.588940  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592844  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592911  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.627509  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:17:47.636174  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:17:47.645428  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649598  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649651  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.685089  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:17:47.693825  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:17:47.704633  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.709997  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.710062  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.752413  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:17:47.761256  165314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:17:47.765490  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:17:47.800513  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:17:47.834950  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:17:47.869178  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:17:47.904985  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:17:47.940157  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 15:17:47.975301  165314 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:47.975398  165314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:17:47.975497  165314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:17:48.003286  165314 cri.go:89] found id: ""
	I1008 15:17:48.003362  165314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:17:48.011838  165314 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:17:48.011861  165314 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:17:48.011915  165314 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:17:48.019689  165314 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:48.020188  165314 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.020357  165314 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:17:48.020889  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.021422  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.021950  165314 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:17:48.021991  165314 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:17:48.022003  165314 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:17:48.022008  165314 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:17:48.022011  165314 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:17:48.022010  165314 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:17:48.022367  165314 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:17:48.030350  165314 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:17:48.030384  165314 kubeadm.go:601] duration metric: took 18.515806ms to restartPrimaryControlPlane
	I1008 15:17:48.030391  165314 kubeadm.go:402] duration metric: took 55.10386ms to StartCluster
	I1008 15:17:48.030407  165314 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.030479  165314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.031062  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.031320  165314 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:17:48.031417  165314 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:17:48.031543  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:48.031550  165314 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:17:48.031579  165314 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:17:48.031542  165314 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:17:48.031697  165314 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:17:48.031741  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.031859  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.032220  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.035473  165314 out.go:179] * Verifying Kubernetes components...
	I1008 15:17:48.036689  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:48.051620  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.051985  165314 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:17:48.052033  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.052536  165314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:17:48.052546  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.053963  165314 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.053984  165314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:17:48.054040  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.079192  165314 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:17:48.079217  165314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:17:48.079284  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.080326  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.101694  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.146073  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:48.160230  165314 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
	I1008 15:17:48.193165  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.212223  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.250131  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.250182  165314 retry.go:31] will retry after 167.936984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:48.267393  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.267422  165314 retry.go:31] will retry after 358.217903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.418711  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.475364  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.475407  165314 retry.go:31] will retry after 446.950012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.626729  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.682981  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.683011  165314 retry.go:31] will retry after 531.317527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.923438  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.977935  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.977972  165314 retry.go:31] will retry after 650.888904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.214916  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:49.268803  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.268853  165314 retry.go:31] will retry after 736.958634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.629397  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:49.684331  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.684369  165314 retry.go:31] will retry after 676.827705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.006882  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.061009  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.061039  165314 retry.go:31] will retry after 545.238805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:50.161718  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:50.362321  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:50.417195  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.417229  165314 retry.go:31] will retry after 1.567260249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.606477  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.661410  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.661455  165314 retry.go:31] will retry after 1.443051142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:51.985236  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:52.040164  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.040193  165314 retry.go:31] will retry after 2.313802653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.105463  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:52.160492  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.160528  165314 retry.go:31] will retry after 1.660110088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:52.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:53.821608  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:53.876616  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:53.876662  165314 retry.go:31] will retry after 3.622883186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.354878  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:54.409389  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.409423  165314 retry.go:31] will retry after 4.24595112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:54.661241  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:17:57.161093  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:57.500619  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:57.554551  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:57.554585  165314 retry.go:31] will retry after 5.598675775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.656416  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:58.714339  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.714378  165314 retry.go:31] will retry after 5.615284906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:59.161298  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:01.161635  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:03.153501  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:03.207250  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:03.207281  165314 retry.go:31] will retry after 5.699792472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:03.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:04.330762  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:04.388974  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:04.389006  165314 retry.go:31] will retry after 4.649889332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:06.161419  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:08.661313  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:08.907702  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:08.963026  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:08.963063  165314 retry.go:31] will retry after 13.849348803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.039214  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:09.093068  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.093106  165314 retry.go:31] will retry after 13.971081611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:11.160802  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:13.161178  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:15.661334  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:18.161260  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:20.661258  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:22.813360  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:22.868201  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:22.868242  165314 retry.go:31] will retry after 18.044250351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.064572  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:23.119242  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.119274  165314 retry.go:31] will retry after 13.659632674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:23.160839  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:25.161596  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:27.661763  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:30.160996  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:32.161239  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:34.161289  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:36.161816  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:36.780076  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:36.835066  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:36.835125  165314 retry.go:31] will retry after 24.301634838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:38.661408  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:40.661719  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:40.913117  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:40.970652  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:40.970689  165314 retry.go:31] will retry after 29.623667492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:43.161261  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:45.661475  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:48.161429  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:50.661022  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:52.661560  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:55.161328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:57.661202  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:00.160922  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:01.137748  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:01.192234  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:01.192272  165314 retry.go:31] will retry after 46.151732803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:02.161479  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:04.661301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:07.161112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:09.161773  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:10.595151  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:10.649317  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:10.649348  165314 retry.go:31] will retry after 34.509482074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:11.661098  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:13.661164  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:16.160980  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:18.161117  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:20.661118  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:23.161018  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:25.661094  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:28.160962  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:30.660939  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:33.160970  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:35.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:38.161485  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:40.161694  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:42.660977  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:44.661727  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:45.159116  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:45.213330  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:45.213473  165314 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:19:47.161077  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:47.344346  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:47.398881  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:47.398996  165314 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:19:47.402318  165314 out.go:179] * Enabled addons: 
	I1008 15:19:47.403650  165314 addons.go:514] duration metric: took 1m59.372237661s for enable addons: enabled=[]
	W1008 15:19:49.161145  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:51.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:53.661015  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:56.161063  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:58.161490  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:00.161646  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:02.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:05.160822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:07.160868  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:09.660834  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:11.660990  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:14.160903  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:16.161003  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:18.161359  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:20.661641  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:22.661701  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:24.661824  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:27.160924  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:29.660942  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:31.661112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:34.160994  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:36.161213  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:38.661173  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:41.160956  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:43.660831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:45.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:48.161052  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:50.661054  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:53.160840  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:55.660921  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:58.160916  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:00.161810  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:02.660851  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:04.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:07.160828  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:09.161076  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:11.661048  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:14.160838  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:16.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:18.161296  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:20.661517  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:23.161552  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:25.161729  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:27.660935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:29.661209  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:32.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:34.660964  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:36.661288  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:38.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:41.160885  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:43.660822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:45.661636  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:47.661793  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:50.160858  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:52.160906  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:54.660820  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:56.660941  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:59.160837  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:01.160968  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:03.660914  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:05.661127  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:08.161235  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:10.661328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:13.160998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:15.661224  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:18.161405  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:20.661507  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:22.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:25.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:27.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:29.161301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:31.161696  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:33.161814  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:35.661109  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:38.161505  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:40.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:43.160905  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:45.161935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:47.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:49.661627  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:52.160997  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:54.660969  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:56.661103  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:58.661532  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:01.160960  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:03.161002  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:05.161043  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:07.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:09.161741  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:11.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:13.661045  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:15.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:17.661374  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:20.160831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:22.161040  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:24.660973  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:26.661132  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:28.661354  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:30.661576  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:32.661875  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:35.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:37.660961  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:39.661150  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:42.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:44.660882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:46.661243  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:23:48.160817  165314 node_ready.go:38] duration metric: took 6m0.000537691s for node "ha-430216" to be "Ready" ...
	I1008 15:23:48.163540  165314 out.go:203] 
	W1008 15:23:48.165619  165314 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:23:48.165638  165314 out.go:285] * 
	W1008 15:23:48.167470  165314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:23:48.169005  165314 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723360467Z" level=info msg="createCtr: removing container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723396505Z" level=info msg="createCtr: deleting container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662 from storage" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.725857077Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.699756741Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a3da441f-a3c0-455a-901b-e7775d5ce11f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.700761676Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b04006d-4701-4e1e-adec-c15f8af9118f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.701717413Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.702118129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.706697114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.707282936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.723578924Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.724988347Z" level=info msg="createCtr: deleting container ID dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from idIndex" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725028065Z" level=info msg="createCtr: removing container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725061393Z" level=info msg="createCtr: deleting container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from storage" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.727163909Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.699561286Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=501d827b-6905-409a-8de5-af070b4d21e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.700580265Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=60ff1316-fdf3-4f8b-b900-2998c6dab5c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701518645Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701736879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.70492584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.705521913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.721364493Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.722696209Z" level=info msg="createCtr: deleting container ID 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from idIndex" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72272718Z" level=info msg="createCtr: removing container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72275607Z" level=info msg="createCtr: deleting container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from storage" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.724994672Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:23:49.164321    2003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:49.164844    2003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:49.166469    2003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:49.167062    2003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:49.168583    2003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:23:49 up  3:06,  0 user,  load average: 0.00, 0.04, 0.11
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:23:40 ha-430216 kubelet[670]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:40 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:40 ha-430216 kubelet[670]: E1008 15:23:40.726322     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.337933     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:23:42 ha-430216 kubelet[670]: I1008 15:23:42.511486     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.511906     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.554027     670 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.699290     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727486     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > podSandboxID="c7e2493b45b33500a696c69a54e5c9459bf12c1c2807d02611a3a297916303fe"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727596     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727630     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.698969     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725313     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > podSandboxID="ffdca120ab327ae1141443ec3ae02f29163bf9f30626a405e44964b17b1c7055"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725428     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725479     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:23:46 ha-430216 kubelet[670]: E1008 15:23:46.717569     670 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:23:48 ha-430216 kubelet[670]: E1008 15:23:48.280070     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d12e4fbf3bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,LastTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (300.669801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.69s)
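
Note on the failure above: the CRI-O and kubelet excerpts show every control-plane container (kube-apiserver, kube-controller-manager, kube-scheduler) failing with "container create failed: cannot open sd-bus: No such file or directory", which typically means the OCI runtime's systemd cgroup manager cannot reach a D-Bus socket inside the node; with no apiserver container ever starting, the repeated connection-refused errors against 192.168.49.2:8443 follow directly. A minimal manual check, assuming the ha-430216 node container is still running and that the standard socket path and unit names apply (an illustrative sketch, not commands taken from this report):

	# Does the D-Bus socket the systemd cgroup manager needs exist inside the node?
	minikube ssh -p ha-430216 -- 'ls -l /run/dbus/system_bus_socket'
	# Are dbus, the container runtime and the kubelet units active?
	minikube ssh -p ha-430216 -- 'sudo systemctl is-active dbus crio kubelet'
	# List containers as seen by CRI-O (the "container status" section above is empty)
	minikube ssh -p ha-430216 -- 'sudo crictl ps -a'

If /run/dbus/system_bus_socket is absent or dbus is inactive inside the node, container creation through the systemd cgroup driver will keep failing exactly as in the kubelet log above.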

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 node delete m03 --alsologtostderr -v 5: exit status 103 (260.161054ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-430216"

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:23:49.611945  169391 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:49.612062  169391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:49.612070  169391 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:49.612074  169391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:49.612285  169391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:49.612620  169391 mustload.go:65] Loading cluster: ha-430216
	I1008 15:23:49.612965  169391 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:49.613381  169391 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:49.631093  169391 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:23:49.631359  169391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:49.693279  169391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:23:49.683176784 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:49.693405  169391 api_server.go:166] Checking apiserver status ...
	I1008 15:23:49.693475  169391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:23:49.693519  169391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:49.711188  169391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	W1008 15:23:49.818465  169391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:23:49.820724  169391 out.go:179] * The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	I1008 15:23:49.822249  169391 out.go:179]   To start a cluster, run: "minikube start -p ha-430216"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-430216 node delete m03 --alsologtostderr -v 5": exit status 103
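
For reference, the exit status 103 above accompanies the "apiserver is not running: (state=Stopped)" message rather than a failure of the delete operation itself; the same pre-check can be reproduced by hand with standard minikube status flags (illustrative usage, assumed rather than taken from this log):

	minikube -p ha-430216 status --format '{{.APIServer}}'   # prints just the apiserver field ("Stopped" here)
	minikube -p ha-430216 status --output json               # machine-readable host/kubelet/apiserver state
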
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
E1008 15:23:49.989401   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 2 (294.092293ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:23:49.871226  169487 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:49.871494  169487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:49.871505  169487 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:49.871509  169487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:49.871744  169487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:49.871923  169487 out.go:368] Setting JSON to false
	I1008 15:23:49.871952  169487 mustload.go:65] Loading cluster: ha-430216
	I1008 15:23:49.871997  169487 notify.go:220] Checking for updates...
	I1008 15:23:49.872274  169487 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:49.872287  169487 status.go:174] checking status of ha-430216 ...
	I1008 15:23:49.872744  169487 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:49.890604  169487 status.go:371] ha-430216 host status = "Running" (err=<nil>)
	I1008 15:23:49.890665  169487 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:23:49.891035  169487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:49.908675  169487 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:23:49.908996  169487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:49.909041  169487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:49.926916  169487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:50.029140  169487 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:50.035857  169487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:23:50.049388  169487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:50.105651  169487 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:23:50.0958899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:50.106189  169487 kubeconfig.go:125] found "ha-430216" server: "https://192.168.49.2:8443"
	I1008 15:23:50.106220  169487 api_server.go:166] Checking apiserver status ...
	I1008 15:23:50.106253  169487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1008 15:23:50.117097  169487 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:23:50.117129  169487 status.go:463] ha-430216 apiserver status = Running (err=<nil>)
	I1008 15:23:50.117143  169487 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5" : exit status 2
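The failure above comes down to the apiserver liveness probe: minikube resolves the container's forwarded SSH port, connects, and runs pgrep for the kube-apiserver process, which exits with status 1, so the node is reported as APIServer: Stopped. A minimal sketch of reproducing that check by hand on the CI host, assuming the ha-430216 container and the SSH key path shown in the log lines above are still present (port lookup, key path, user, pgrep pattern, and status command are all taken from those log lines):

    # Look up the host port forwarded to the container's SSH port (22/tcp).
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-430216)
    # Exit status 1 from pgrep means no kube-apiserver process is running inside the node container.
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
      -i /home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa \
      docker@127.0.0.1 'sudo pgrep -xnf kube-apiserver.*minikube.*'
    # The same condition is what the failing assertion observes via:
    out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5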
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:17:40.438320544Z",
	            "FinishedAt": "2025-10-08T15:17:39.294925999Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a859197b65a38409f07e70f3c98c669d775d8929557c4da2e83a4d313514263a",
	            "SandboxKey": "/var/run/docker/netns/a859197b65a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:01:a9:ea:cf:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "e35ced1eecbb689c7a373ec6a83d63c8613f8d6b045af1de4211947cfde7a915",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (297.653426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                      │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                          │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:17:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:17:40.199526  165314 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:40.199829  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.199841  165314 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:40.199845  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.200025  165314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:40.200506  165314 out.go:368] Setting JSON to false
	I1008 15:17:40.201472  165314 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10811,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:17:40.201578  165314 start.go:141] virtualization: kvm guest
	I1008 15:17:40.203913  165314 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:17:40.205535  165314 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:17:40.205583  165314 notify.go:220] Checking for updates...
	I1008 15:17:40.208565  165314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:17:40.210117  165314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:40.211622  165314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:17:40.213029  165314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:17:40.214476  165314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:17:40.216479  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:40.216629  165314 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:17:40.242539  165314 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:17:40.242667  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.304220  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.293786011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.304329  165314 docker.go:318] overlay module found
	I1008 15:17:40.306374  165314 out.go:179] * Using the docker driver based on existing profile
	I1008 15:17:40.307763  165314 start.go:305] selected driver: docker
	I1008 15:17:40.307785  165314 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:40.307880  165314 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:17:40.307983  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.364929  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.355521293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.365573  165314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:17:40.365619  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:40.365678  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:40.365730  165314 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:17:40.367770  165314 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:17:40.369034  165314 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:17:40.370366  165314 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:17:40.371596  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:40.371635  165314 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:17:40.371651  165314 cache.go:58] Caching tarball of preloaded images
	I1008 15:17:40.371716  165314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:17:40.371748  165314 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:17:40.371756  165314 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:17:40.371872  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.392684  165314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:17:40.392707  165314 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:17:40.392735  165314 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:17:40.392762  165314 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:17:40.392820  165314 start.go:364] duration metric: took 40.317µs to acquireMachinesLock for "ha-430216"
	I1008 15:17:40.392840  165314 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:17:40.392844  165314 fix.go:54] fixHost starting: 
	I1008 15:17:40.393093  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.410344  165314 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:17:40.410395  165314 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:17:40.412417  165314 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:17:40.412507  165314 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:17:40.657462  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.676019  165314 kic.go:430] container "ha-430216" state is running.
	I1008 15:17:40.676351  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:40.696423  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.696761  165314 machine.go:93] provisionDockerMachine start ...
	I1008 15:17:40.696862  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:40.715440  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:40.715761  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:40.715779  165314 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:17:40.716557  165314 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36636->127.0.0.1:32788: read: connection reset by peer
	I1008 15:17:43.866807  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:43.866844  165314 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:17:43.866913  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:43.885755  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:43.886066  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:43.886085  165314 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:17:44.044811  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:44.044935  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.062657  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.062943  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.062962  165314 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:17:44.211403  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:17:44.211432  165314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:17:44.211462  165314 ubuntu.go:190] setting up certificates
	I1008 15:17:44.211481  165314 provision.go:84] configureAuth start
	I1008 15:17:44.211544  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:44.229072  165314 provision.go:143] copyHostCerts
	I1008 15:17:44.229109  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229137  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:17:44.229151  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229221  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:17:44.229317  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229336  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:17:44.229340  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229367  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:17:44.229432  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229473  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:17:44.229484  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229515  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:17:44.229587  165314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:17:44.499077  165314 provision.go:177] copyRemoteCerts
	I1008 15:17:44.499163  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:17:44.499212  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.516869  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:44.621363  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:17:44.621431  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:17:44.640000  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:17:44.640066  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:17:44.658584  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:17:44.658662  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 15:17:44.677694  165314 provision.go:87] duration metric: took 466.198036ms to configureAuth
	I1008 15:17:44.677721  165314 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:17:44.677906  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:44.678018  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.696317  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.696574  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.696594  165314 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:17:44.957182  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:17:44.957211  165314 machine.go:96] duration metric: took 4.260426846s to provisionDockerMachine
	I1008 15:17:44.957226  165314 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:17:44.957238  165314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:17:44.957296  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:17:44.957347  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.975366  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.079375  165314 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:17:45.083426  165314 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:17:45.083475  165314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:17:45.083489  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:17:45.083555  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:17:45.083654  165314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:17:45.083668  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:17:45.083797  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:17:45.092110  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:45.110634  165314 start.go:296] duration metric: took 153.392527ms for postStartSetup
	I1008 15:17:45.110712  165314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:45.110746  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.128609  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.229014  165314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:17:45.233764  165314 fix.go:56] duration metric: took 4.840910167s for fixHost
	I1008 15:17:45.233790  165314 start.go:83] releasing machines lock for "ha-430216", held for 4.840957644s
	I1008 15:17:45.233848  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:45.251189  165314 ssh_runner.go:195] Run: cat /version.json
	I1008 15:17:45.251208  165314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:17:45.251250  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.251265  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.269790  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.270642  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.424404  165314 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:45.431092  165314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:17:45.467246  165314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:17:45.472156  165314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:17:45.472216  165314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:17:45.480408  165314 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:17:45.480432  165314 start.go:495] detecting cgroup driver to use...
	I1008 15:17:45.480483  165314 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:17:45.480532  165314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:17:45.494905  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:17:45.507311  165314 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:17:45.507372  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:17:45.522294  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:17:45.535383  165314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:17:45.613394  165314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:17:45.698519  165314 docker.go:234] disabling docker service ...
	I1008 15:17:45.698592  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:17:45.712972  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:17:45.725410  165314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:17:45.808999  165314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:17:45.890393  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:17:45.903437  165314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:17:45.918341  165314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:17:45.918398  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.928311  165314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:17:45.928386  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.938723  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.948562  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.958637  165314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:17:45.967780  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.977284  165314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.986240  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.995533  165314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:17:46.003222  165314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:17:46.011206  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.088962  165314 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:17:46.194350  165314 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:17:46.194427  165314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:17:46.198496  165314 start.go:563] Will wait 60s for crictl version
	I1008 15:17:46.198558  165314 ssh_runner.go:195] Run: which crictl
	I1008 15:17:46.202386  165314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:17:46.228548  165314 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:17:46.228621  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.256833  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.288593  165314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:17:46.289934  165314 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:17:46.307676  165314 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:17:46.312234  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.324342  165314 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:17:46.324511  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:46.324585  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.355836  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.355859  165314 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:17:46.355919  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.382577  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.382601  165314 cache_images.go:85] Images are preloaded, skipping loading
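	Here the preload check simply asks the runtime for its image list (crictl images --output json) and compares it against the images the bootstrap needs, concluding that nothing has to be extracted or pulled. A small Go sketch of that comparison is below; the JSON field names ("images", "repoTags") follow the CRI ListImages response as crictl prints it and should be treated as an assumption to verify against your crictl version.

```go
// crictl_images.go: sketch that shells out to "crictl images --output json" and checks a
// required image against what the runtime already has, similar to the preload check above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the assumed shape of crictl's JSON output (CRI ListImages response).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example image taken from the log; a full preload check would loop over the whole list.
	want := "registry.k8s.io/pause:3.10.1"
	fmt.Printf("%s preloaded: %v\n", want, have[want])
}
```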
	I1008 15:17:46.382609  165314 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:17:46.382723  165314 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:17:46.382802  165314 ssh_runner.go:195] Run: crio config
	I1008 15:17:46.428099  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:46.428124  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:46.428145  165314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:17:46.428173  165314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:17:46.428324  165314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:17:46.428406  165314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:17:46.436958  165314 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:17:46.437025  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:17:46.445838  165314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:17:46.458878  165314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:17:46.472075  165314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
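	The 2205-byte kubeadm.yaml.new written here is the four-document manifest printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check a generated file like this is to walk the YAML stream and list each document's kind; the sketch below does that, assuming gopkg.in/yaml.v3 is available.

```go
// kubeadm_yaml_kinds.go: list the kind/apiVersion of every document in the generated
// kubeadm.yaml, as a quick structural check on the manifest shown above.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			log.Fatal(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```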
	I1008 15:17:46.485552  165314 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:17:46.489640  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.500389  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.578574  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
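	Both host entries in this run (host.minikube.internal earlier and control-plane.minikube.internal here) are maintained with the same grep/echo/cp one-liner: drop any existing line for the name, then append the new IP-to-name mapping. A stdlib-only Go sketch of that upsert is below; it writes to a hypothetical local file rather than the real /etc/hosts, which the log updates via sudo cp.

```go
// hosts_upsert.go: stdlib sketch of the /etc/hosts rewrite done by the one-liner above.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost removes any existing "<ip>\t<name>" mapping for name and appends a fresh one.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that does not already end with "<tab><name>", like the grep -v above.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts.test" is a hypothetical local copy used here instead of the real /etc/hosts.
	if err := upsertHost("hosts.test", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```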
	I1008 15:17:46.604149  165314 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:17:46.604181  165314 certs.go:195] generating shared ca certs ...
	I1008 15:17:46.604215  165314 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:46.604428  165314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:17:46.604510  165314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:17:46.604529  165314 certs.go:257] generating profile certs ...
	I1008 15:17:46.604662  165314 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:17:46.604697  165314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:17:46.604728  165314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 15:17:47.358821  165314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 ...
	I1008 15:17:47.358862  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92: {Name:mk5db33d068b68a4018c945a3cf387814181d041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359079  165314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 ...
	I1008 15:17:47.359099  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92: {Name:mk225894b1a1cad5b94eea81035f94b5877a9e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359220  165314 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:17:47.359416  165314 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:17:47.359616  165314 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:17:47.359637  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:17:47.359656  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:17:47.359681  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:17:47.359700  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:17:47.359717  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:17:47.359737  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:17:47.359753  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:17:47.359771  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:17:47.359839  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:17:47.359889  165314 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:17:47.359903  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:17:47.359938  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:17:47.359970  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:17:47.360003  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:17:47.360060  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:47.360098  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.360118  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.360137  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.360687  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:17:47.379230  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:17:47.397207  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:17:47.416101  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:17:47.434846  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:17:47.452646  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:17:47.471016  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:17:47.488663  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:17:47.506734  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:17:47.524704  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:17:47.542944  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:17:47.560522  165314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:17:47.573301  165314 ssh_runner.go:195] Run: openssl version
	I1008 15:17:47.579415  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:17:47.588940  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592844  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592911  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.627509  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:17:47.636174  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:17:47.645428  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649598  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649651  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.685089  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:17:47.693825  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:17:47.704633  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.709997  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.710062  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.752413  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:17:47.761256  165314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:17:47.765490  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:17:47.800513  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:17:47.834950  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:17:47.869178  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:17:47.904985  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:17:47.940157  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
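	The six openssl runs above all use -checkend 86400, which asks whether each certificate stays valid for at least another 24 hours (exit status 0 when it does, non-zero when it will expire within the window). The equivalent check in Go with crypto/x509 is short; the sketch below reads one of the certs named in the log and reports whether it expires within that window.

```go
// cert_checkend.go: stdlib equivalent of "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires within duration d.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; reading it on the host requires appropriate permissions.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon) // "true" here is what would trigger regeneration
}
```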
	I1008 15:17:47.975301  165314 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:47.975398  165314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:17:47.975497  165314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:17:48.003286  165314 cri.go:89] found id: ""
	I1008 15:17:48.003362  165314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:17:48.011838  165314 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:17:48.011861  165314 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:17:48.011915  165314 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:17:48.019689  165314 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:48.020188  165314 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.020357  165314 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:17:48.020889  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.021422  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.021950  165314 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:17:48.021991  165314 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:17:48.022003  165314 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:17:48.022008  165314 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:17:48.022011  165314 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:17:48.022010  165314 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:17:48.022367  165314 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:17:48.030350  165314 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:17:48.030384  165314 kubeadm.go:601] duration metric: took 18.515806ms to restartPrimaryControlPlane
	I1008 15:17:48.030391  165314 kubeadm.go:402] duration metric: took 55.10386ms to StartCluster
	I1008 15:17:48.030407  165314 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.030479  165314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.031062  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.031320  165314 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:17:48.031417  165314 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:17:48.031543  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:48.031550  165314 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:17:48.031579  165314 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:17:48.031542  165314 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:17:48.031697  165314 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:17:48.031741  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.031859  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.032220  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.035473  165314 out.go:179] * Verifying Kubernetes components...
	I1008 15:17:48.036689  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:48.051620  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.051985  165314 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:17:48.052033  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.052536  165314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:17:48.052546  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.053963  165314 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.053984  165314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:17:48.054040  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.079192  165314 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:17:48.079217  165314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:17:48.079284  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.080326  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.101694  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.146073  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:48.160230  165314 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
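	The node_ready.go wait that starts here, and the connection-refused warnings that follow, are a simple poll of the node's Ready condition against the apiserver at 192.168.49.2:8443 using the profile's client certificate. A sketch of that poll with client-go (assuming it is available as a dependency) looks roughly like this:

```go
// node_ready_poll.go: sketch of the readiness poll behind the node_ready.go lines above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/21681-94984/.minikube" // paths from the log
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/ha-430216/client.crt",
			KeyFile:  profile + "/profiles/ha-430216/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-430216", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// connection-refused errors, like the warnings in the log, simply wait and retry
		time.Sleep(2 * time.Second)
	}
	log.Fatal("node never became Ready")
}
```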
	I1008 15:17:48.193165  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.212223  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.250131  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.250182  165314 retry.go:31] will retry after 167.936984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:48.267393  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.267422  165314 retry.go:31] will retry after 358.217903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
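	Every apply in this stretch fails the same way, because the apiserver behind localhost:8443 is not accepting connections yet, and retry.go re-runs the same kubectl apply with a growing delay until it succeeds or the attempts are exhausted. A stdlib Go sketch of that retry-with-backoff wrapper is below; it runs kubectl locally rather than over SSH with sudo, as the log does.

```go
// apply_retry.go: re-run "kubectl apply" with a growing, jittered delay while the
// apiserver is still coming up, mirroring the retry.go lines above.
package main

import (
	"log"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		var out []byte
		if out, err = cmd.CombinedOutput(); err == nil {
			return nil // apply succeeded
		}
		log.Printf("apply failed, will retry after %v: %v\n%s", delay, err, out)
		time.Sleep(delay)
		// Grow the delay and add jitter, roughly like the increasing waits in the log.
		delay = delay*2 + time.Duration(rand.Int63n(int64(100*time.Millisecond)))
	}
	return err
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.34.1/kubectl", // paths taken from the log
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		10,
	)
	if err != nil {
		log.Fatal(err)
	}
}
```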
	I1008 15:17:48.418711  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.475364  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.475407  165314 retry.go:31] will retry after 446.950012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.626729  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.682981  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.683011  165314 retry.go:31] will retry after 531.317527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.923438  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.977935  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.977972  165314 retry.go:31] will retry after 650.888904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.214916  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:49.268803  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.268853  165314 retry.go:31] will retry after 736.958634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.629397  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:49.684331  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.684369  165314 retry.go:31] will retry after 676.827705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.006882  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.061009  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.061039  165314 retry.go:31] will retry after 545.238805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:50.161718  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:50.362321  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:50.417195  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.417229  165314 retry.go:31] will retry after 1.567260249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.606477  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.661410  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.661455  165314 retry.go:31] will retry after 1.443051142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:51.985236  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:52.040164  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.040193  165314 retry.go:31] will retry after 2.313802653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.105463  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:52.160492  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.160528  165314 retry.go:31] will retry after 1.660110088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:52.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:53.821608  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:53.876616  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:53.876662  165314 retry.go:31] will retry after 3.622883186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.354878  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:54.409389  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.409423  165314 retry.go:31] will retry after 4.24595112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:54.661241  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:17:57.161093  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:57.500619  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:57.554551  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:57.554585  165314 retry.go:31] will retry after 5.598675775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.656416  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:58.714339  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.714378  165314 retry.go:31] will retry after 5.615284906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:59.161298  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:01.161635  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:03.153501  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:03.207250  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:03.207281  165314 retry.go:31] will retry after 5.699792472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:03.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:04.330762  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:04.388974  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:04.389006  165314 retry.go:31] will retry after 4.649889332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:06.161419  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:08.661313  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:08.907702  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:08.963026  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:08.963063  165314 retry.go:31] will retry after 13.849348803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.039214  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:09.093068  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.093106  165314 retry.go:31] will retry after 13.971081611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:11.160802  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:13.161178  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:15.661334  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:18.161260  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:20.661258  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:22.813360  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:22.868201  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:22.868242  165314 retry.go:31] will retry after 18.044250351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.064572  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:23.119242  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.119274  165314 retry.go:31] will retry after 13.659632674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:23.160839  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:25.161596  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:27.661763  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:30.160996  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:32.161239  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:34.161289  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:36.161816  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:36.780076  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:36.835066  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:36.835125  165314 retry.go:31] will retry after 24.301634838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:38.661408  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:40.661719  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:40.913117  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:40.970652  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:40.970689  165314 retry.go:31] will retry after 29.623667492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:43.161261  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:45.661475  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:48.161429  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:50.661022  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:52.661560  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:55.161328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:57.661202  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:00.160922  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:01.137748  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:01.192234  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:01.192272  165314 retry.go:31] will retry after 46.151732803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:02.161479  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:04.661301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:07.161112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:09.161773  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:10.595151  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:10.649317  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:10.649348  165314 retry.go:31] will retry after 34.509482074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:11.661098  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:13.661164  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:16.160980  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:18.161117  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:20.661118  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:23.161018  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:25.661094  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:28.160962  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:30.660939  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:33.160970  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:35.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:38.161485  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:40.161694  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:42.660977  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:44.661727  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:45.159116  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:45.213330  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:45.213473  165314 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:19:47.161077  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:47.344346  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:47.398881  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:47.398996  165314 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:19:47.402318  165314 out.go:179] * Enabled addons: 
	I1008 15:19:47.403650  165314 addons.go:514] duration metric: took 1m59.372237661s for enable addons: enabled=[]
	W1008 15:19:49.161145  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:51.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:53.661015  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:56.161063  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:58.161490  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:00.161646  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:02.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:05.160822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:07.160868  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:09.660834  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:11.660990  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:14.160903  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:16.161003  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:18.161359  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:20.661641  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:22.661701  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:24.661824  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:27.160924  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:29.660942  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:31.661112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:34.160994  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:36.161213  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:38.661173  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:41.160956  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:43.660831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:45.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:48.161052  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:50.661054  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:53.160840  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:55.660921  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:58.160916  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:00.161810  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:02.660851  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:04.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:07.160828  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:09.161076  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:11.661048  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:14.160838  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:16.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:18.161296  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:20.661517  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:23.161552  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:25.161729  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:27.660935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:29.661209  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:32.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:34.660964  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:36.661288  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:38.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:41.160885  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:43.660822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:45.661636  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:47.661793  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:50.160858  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:52.160906  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:54.660820  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:56.660941  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:59.160837  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:01.160968  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:03.660914  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:05.661127  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:08.161235  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:10.661328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:13.160998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:15.661224  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:18.161405  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:20.661507  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:22.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:25.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:27.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:29.161301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:31.161696  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:33.161814  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:35.661109  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:38.161505  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:40.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:43.160905  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:45.161935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:47.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:49.661627  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:52.160997  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:54.660969  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:56.661103  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:58.661532  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:01.160960  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:03.161002  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:05.161043  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:07.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:09.161741  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:11.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:13.661045  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:15.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:17.661374  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:20.160831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:22.161040  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:24.660973  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:26.661132  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:28.661354  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:30.661576  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:32.661875  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:35.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:37.660961  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:39.661150  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:42.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:44.660882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:46.661243  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:23:48.160817  165314 node_ready.go:38] duration metric: took 6m0.000537691s for node "ha-430216" to be "Ready" ...
	I1008 15:23:48.163540  165314 out.go:203] 
	W1008 15:23:48.165619  165314 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:23:48.165638  165314 out.go:285] * 
	W1008 15:23:48.167470  165314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:23:48.169005  165314 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723360467Z" level=info msg="createCtr: removing container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723396505Z" level=info msg="createCtr: deleting container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662 from storage" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.725857077Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.699756741Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a3da441f-a3c0-455a-901b-e7775d5ce11f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.700761676Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b04006d-4701-4e1e-adec-c15f8af9118f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.701717413Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.702118129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.706697114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.707282936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.723578924Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.724988347Z" level=info msg="createCtr: deleting container ID dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from idIndex" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725028065Z" level=info msg="createCtr: removing container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725061393Z" level=info msg="createCtr: deleting container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from storage" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.727163909Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.699561286Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=501d827b-6905-409a-8de5-af070b4d21e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.700580265Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=60ff1316-fdf3-4f8b-b900-2998c6dab5c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701518645Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701736879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.70492584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.705521913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.721364493Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.722696209Z" level=info msg="createCtr: deleting container ID 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from idIndex" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72272718Z" level=info msg="createCtr: removing container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72275607Z" level=info msg="createCtr: deleting container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from storage" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.724994672Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:23:51.005698    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:51.006291    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:51.007890    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:51.008343    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:51.010671    2187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:23:51 up  3:06,  0 user,  load average: 0.00, 0.04, 0.11
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.337933     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:23:42 ha-430216 kubelet[670]: I1008 15:23:42.511486     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.511906     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.554027     670 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.699290     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727486     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > podSandboxID="c7e2493b45b33500a696c69a54e5c9459bf12c1c2807d02611a3a297916303fe"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727596     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727630     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.698969     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725313     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > podSandboxID="ffdca120ab327ae1141443ec3ae02f29163bf9f30626a405e44964b17b1c7055"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725428     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725479     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:23:46 ha-430216 kubelet[670]: E1008 15:23:46.717569     670 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:23:48 ha-430216 kubelet[670]: E1008 15:23:48.280070     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d12e4fbf3bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,LastTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:23:49 ha-430216 kubelet[670]: E1008 15:23:49.339197     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:23:49 ha-430216 kubelet[670]: I1008 15:23:49.514026     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:23:49 ha-430216 kubelet[670]: E1008 15:23:49.514554     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	

-- /stdout --
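
Side note on the repeated "Container creation error: cannot open sd-bus: No such file or directory" entries in the crio and kubelet logs above: with the systemd cgroup driver reported by this host, the OCI runtime has to talk to systemd over D-Bus when creating containers, so one quick manual probe is to stat the usual socket paths inside the node. The sketch below is illustrative only; the two paths are common defaults and an assumption for this check, not values taken from this run.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Common default locations for systemd's manager socket and the
	// system D-Bus socket (assumptions for illustration, not from this log).
	paths := []string{
		"/run/systemd/private",
		"/run/dbus/system_bus_socket",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: %v\n", p, err) // a missing socket would match the create error above
		} else {
			fmt.Printf("%s: present\n", p)
		}
	}
}
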
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (290.137727ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.83s)
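
For reference, the `--format={{.APIServer}}` and `--format={{.Host}}` status checks above render Go templates against minikube's status output. A minimal stand-alone sketch of the same template mechanism, using a stand-in struct (not minikube's real status type) populated with the two values reported in this run:

package main

import (
	"os"
	"text/template"
)

// status is a stand-in with only the two fields exercised by the
// --format flags seen in this log; it is not minikube's real type.
type status struct {
	Host      string
	APIServer string
}

func main() {
	s := status{Host: "Running", APIServer: "Stopped"} // values reported above
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}
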

x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-430216" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:17:40.438320544Z",
	            "FinishedAt": "2025-10-08T15:17:39.294925999Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a859197b65a38409f07e70f3c98c669d775d8929557c4da2e83a4d313514263a",
	            "SandboxKey": "/var/run/docker/netns/a859197b65a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:01:a9:ea:cf:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "e35ced1eecbb689c7a373ec6a83d63c8613f8d6b045af1de4211947cfde7a915",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
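
A minimal sketch of pulling the published SSH port out of the inspect output above; minikube itself does the equivalent with a `docker container inspect -f` Go template later in this log. Only NetworkSettings.Ports is modeled, and the profile name is taken from this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container mirrors only NetworkSettings.Ports from `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "ha-430216").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("unexpected inspect output: %v", err)
	}
	if b := cs[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
		fmt.Printf("ssh published at %s:%s\n", b[0].HostIP, b[0].HostPort) // 127.0.0.1:32788 in this run
	}
}
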
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (292.466924ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- rollout status deployment/busybox                      │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                       │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                          │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                            │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:17:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:17:40.199526  165314 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:17:40.199829  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.199841  165314 out.go:374] Setting ErrFile to fd 2...
	I1008 15:17:40.199845  165314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:17:40.200025  165314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:17:40.200506  165314 out.go:368] Setting JSON to false
	I1008 15:17:40.201472  165314 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10811,"bootTime":1759925849,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:17:40.201578  165314 start.go:141] virtualization: kvm guest
	I1008 15:17:40.203913  165314 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:17:40.205535  165314 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:17:40.205583  165314 notify.go:220] Checking for updates...
	I1008 15:17:40.208565  165314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:17:40.210117  165314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:40.211622  165314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:17:40.213029  165314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:17:40.214476  165314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:17:40.216479  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:40.216629  165314 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:17:40.242539  165314 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:17:40.242667  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.304220  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.293786011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.304329  165314 docker.go:318] overlay module found
	I1008 15:17:40.306374  165314 out.go:179] * Using the docker driver based on existing profile
	I1008 15:17:40.307763  165314 start.go:305] selected driver: docker
	I1008 15:17:40.307785  165314 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:40.307880  165314 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:17:40.307983  165314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:17:40.364929  165314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:17:40.355521293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:17:40.365573  165314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:17:40.365619  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:40.365678  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:40.365730  165314 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:17:40.367770  165314 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:17:40.369034  165314 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:17:40.370366  165314 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:17:40.371596  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:40.371635  165314 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:17:40.371651  165314 cache.go:58] Caching tarball of preloaded images
	I1008 15:17:40.371716  165314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:17:40.371748  165314 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:17:40.371756  165314 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:17:40.371872  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.392684  165314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:17:40.392707  165314 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:17:40.392735  165314 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:17:40.392762  165314 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:17:40.392820  165314 start.go:364] duration metric: took 40.317µs to acquireMachinesLock for "ha-430216"
	I1008 15:17:40.392840  165314 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:17:40.392844  165314 fix.go:54] fixHost starting: 
	I1008 15:17:40.393093  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.410344  165314 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:17:40.410395  165314 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:17:40.412417  165314 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:17:40.412507  165314 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:17:40.657462  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:40.676019  165314 kic.go:430] container "ha-430216" state is running.
	I1008 15:17:40.676351  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:40.696423  165314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:17:40.696761  165314 machine.go:93] provisionDockerMachine start ...
	I1008 15:17:40.696862  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:40.715440  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:40.715761  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:40.715779  165314 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:17:40.716557  165314 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36636->127.0.0.1:32788: read: connection reset by peer
	I1008 15:17:43.866807  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:43.866844  165314 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:17:43.866913  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:43.885755  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:43.886066  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:43.886085  165314 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:17:44.044811  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:17:44.044935  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.062657  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.062943  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.062962  165314 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:17:44.211403  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:17:44.211432  165314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:17:44.211462  165314 ubuntu.go:190] setting up certificates
	I1008 15:17:44.211481  165314 provision.go:84] configureAuth start
	I1008 15:17:44.211544  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:44.229072  165314 provision.go:143] copyHostCerts
	I1008 15:17:44.229109  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229137  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:17:44.229151  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:17:44.229221  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:17:44.229317  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229336  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:17:44.229340  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:17:44.229367  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:17:44.229432  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229473  165314 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:17:44.229484  165314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:17:44.229515  165314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:17:44.229587  165314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:17:44.499077  165314 provision.go:177] copyRemoteCerts
	I1008 15:17:44.499163  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:17:44.499212  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.516869  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:44.621363  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:17:44.621431  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:17:44.640000  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:17:44.640066  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:17:44.658584  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:17:44.658662  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 15:17:44.677694  165314 provision.go:87] duration metric: took 466.198036ms to configureAuth
	I1008 15:17:44.677721  165314 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:17:44.677906  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:44.678018  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.696317  165314 main.go:141] libmachine: Using SSH client type: native
	I1008 15:17:44.696574  165314 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1008 15:17:44.696594  165314 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:17:44.957182  165314 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:17:44.957211  165314 machine.go:96] duration metric: took 4.260426846s to provisionDockerMachine
	I1008 15:17:44.957226  165314 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:17:44.957238  165314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:17:44.957296  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:17:44.957347  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:44.975366  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.079375  165314 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:17:45.083426  165314 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:17:45.083475  165314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:17:45.083489  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:17:45.083555  165314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:17:45.083654  165314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:17:45.083668  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:17:45.083797  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:17:45.092110  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:45.110634  165314 start.go:296] duration metric: took 153.392527ms for postStartSetup
	I1008 15:17:45.110712  165314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:17:45.110746  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.128609  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.229014  165314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:17:45.233764  165314 fix.go:56] duration metric: took 4.840910167s for fixHost
	I1008 15:17:45.233790  165314 start.go:83] releasing machines lock for "ha-430216", held for 4.840957644s
	I1008 15:17:45.233848  165314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:17:45.251189  165314 ssh_runner.go:195] Run: cat /version.json
	I1008 15:17:45.251208  165314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:17:45.251250  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.251265  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:45.269790  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.270642  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:45.424404  165314 ssh_runner.go:195] Run: systemctl --version
	I1008 15:17:45.431092  165314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:17:45.467246  165314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:17:45.472156  165314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:17:45.472216  165314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:17:45.480408  165314 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:17:45.480432  165314 start.go:495] detecting cgroup driver to use...
	I1008 15:17:45.480483  165314 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:17:45.480532  165314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:17:45.494905  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:17:45.507311  165314 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:17:45.507372  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:17:45.522294  165314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:17:45.535383  165314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:17:45.613394  165314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:17:45.698519  165314 docker.go:234] disabling docker service ...
	I1008 15:17:45.698592  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:17:45.712972  165314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:17:45.725410  165314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:17:45.808999  165314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:17:45.890393  165314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:17:45.903437  165314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:17:45.918341  165314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:17:45.918398  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.928311  165314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:17:45.928386  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.938723  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.948562  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.958637  165314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:17:45.967780  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.977284  165314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.986240  165314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:17:45.995533  165314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:17:46.003222  165314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:17:46.011206  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.088962  165314 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:17:46.194350  165314 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:17:46.194427  165314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:17:46.198496  165314 start.go:563] Will wait 60s for crictl version
	I1008 15:17:46.198558  165314 ssh_runner.go:195] Run: which crictl
	I1008 15:17:46.202386  165314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:17:46.228548  165314 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:17:46.228621  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.256833  165314 ssh_runner.go:195] Run: crio --version
	I1008 15:17:46.288593  165314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:17:46.289934  165314 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:17:46.307676  165314 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:17:46.312234  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.324342  165314 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:17:46.324511  165314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:17:46.324585  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.355836  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.355859  165314 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:17:46.355919  165314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:17:46.382577  165314 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:17:46.382601  165314 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:17:46.382609  165314 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:17:46.382723  165314 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:17:46.382802  165314 ssh_runner.go:195] Run: crio config
	I1008 15:17:46.428099  165314 cni.go:84] Creating CNI manager for ""
	I1008 15:17:46.428124  165314 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:17:46.428145  165314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:17:46.428173  165314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:17:46.428324  165314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:17:46.428406  165314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:17:46.436958  165314 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:17:46.437025  165314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:17:46.445838  165314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:17:46.458878  165314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:17:46.472075  165314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:17:46.485552  165314 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:17:46.489640  165314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:17:46.500389  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:46.578574  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:46.604149  165314 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:17:46.604181  165314 certs.go:195] generating shared ca certs ...
	I1008 15:17:46.604215  165314 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:46.604428  165314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:17:46.604510  165314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:17:46.604529  165314 certs.go:257] generating profile certs ...
	I1008 15:17:46.604662  165314 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:17:46.604697  165314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:17:46.604728  165314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 15:17:47.358821  165314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 ...
	I1008 15:17:47.358862  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92: {Name:mk5db33d068b68a4018c945a3cf387814181d041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359079  165314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 ...
	I1008 15:17:47.359099  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92: {Name:mk225894b1a1cad5b94eea81035f94b5877a9e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:47.359220  165314 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt
	I1008 15:17:47.359416  165314 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92 -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key
	I1008 15:17:47.359616  165314 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:17:47.359637  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:17:47.359656  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:17:47.359681  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:17:47.359700  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:17:47.359717  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:17:47.359737  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:17:47.359753  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:17:47.359771  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:17:47.359839  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:17:47.359889  165314 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:17:47.359903  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:17:47.359938  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:17:47.359970  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:17:47.360003  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:17:47.360060  165314 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:17:47.360098  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.360118  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.360137  165314 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.360687  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:17:47.379230  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:17:47.397207  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:17:47.416101  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:17:47.434846  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:17:47.452646  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:17:47.471016  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:17:47.488663  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:17:47.506734  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:17:47.524704  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:17:47.542944  165314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:17:47.560522  165314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:17:47.573301  165314 ssh_runner.go:195] Run: openssl version
	I1008 15:17:47.579415  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:17:47.588940  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592844  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.592911  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:17:47.627509  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:17:47.636174  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:17:47.645428  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649598  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.649651  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:17:47.685089  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:17:47.693825  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:17:47.704633  165314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.709997  165314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.710062  165314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:17:47.752413  165314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:17:47.761256  165314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:17:47.765490  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:17:47.800513  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:17:47.834950  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:17:47.869178  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:17:47.904985  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:17:47.940157  165314 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 15:17:47.975301  165314 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:17:47.975398  165314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:17:47.975497  165314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:17:48.003286  165314 cri.go:89] found id: ""
	I1008 15:17:48.003362  165314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:17:48.011838  165314 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:17:48.011861  165314 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:17:48.011915  165314 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:17:48.019689  165314 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:17:48.020188  165314 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.020357  165314 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:17:48.020889  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.021422  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.021950  165314 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:17:48.021991  165314 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:17:48.022003  165314 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:17:48.022008  165314 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:17:48.022011  165314 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:17:48.022010  165314 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:17:48.022367  165314 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:17:48.030350  165314 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:17:48.030384  165314 kubeadm.go:601] duration metric: took 18.515806ms to restartPrimaryControlPlane
	I1008 15:17:48.030391  165314 kubeadm.go:402] duration metric: took 55.10386ms to StartCluster
	I1008 15:17:48.030407  165314 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.030479  165314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:17:48.031062  165314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:17:48.031320  165314 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:17:48.031417  165314 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:17:48.031543  165314 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:17:48.031550  165314 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:17:48.031579  165314 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:17:48.031542  165314 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:17:48.031697  165314 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:17:48.031741  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.031859  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.032220  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.035473  165314 out.go:179] * Verifying Kubernetes components...
	I1008 15:17:48.036689  165314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:17:48.051620  165314 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:17:48.051985  165314 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:17:48.052033  165314 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:17:48.052536  165314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:17:48.052546  165314 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:17:48.053963  165314 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.053984  165314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:17:48.054040  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.079192  165314 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:17:48.079217  165314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:17:48.079284  165314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:17:48.080326  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.101694  165314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:17:48.146073  165314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:17:48.160230  165314 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
	I1008 15:17:48.193165  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:17:48.212223  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.250131  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.250182  165314 retry.go:31] will retry after 167.936984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:48.267393  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.267422  165314 retry.go:31] will retry after 358.217903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.418711  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.475364  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.475407  165314 retry.go:31] will retry after 446.950012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.626729  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:48.682981  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.683011  165314 retry.go:31] will retry after 531.317527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.923438  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:48.977935  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:48.977972  165314 retry.go:31] will retry after 650.888904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.214916  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:49.268803  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.268853  165314 retry.go:31] will retry after 736.958634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.629397  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:49.684331  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:49.684369  165314 retry.go:31] will retry after 676.827705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.006882  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.061009  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.061039  165314 retry.go:31] will retry after 545.238805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:50.161718  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:50.362321  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:50.417195  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.417229  165314 retry.go:31] will retry after 1.567260249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.606477  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:50.661410  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:50.661455  165314 retry.go:31] will retry after 1.443051142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:51.985236  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:52.040164  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.040193  165314 retry.go:31] will retry after 2.313802653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.105463  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:52.160492  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:52.160528  165314 retry.go:31] will retry after 1.660110088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:52.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:53.821608  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:53.876616  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:53.876662  165314 retry.go:31] will retry after 3.622883186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.354878  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:54.409389  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:54.409423  165314 retry.go:31] will retry after 4.24595112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:54.661241  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:17:57.161093  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:17:57.500619  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:17:57.554551  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:57.554585  165314 retry.go:31] will retry after 5.598675775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.656416  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:17:58.714339  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:17:58.714378  165314 retry.go:31] will retry after 5.615284906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:17:59.161298  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:01.161635  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:03.153501  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:03.207250  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:03.207281  165314 retry.go:31] will retry after 5.699792472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:03.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:04.330762  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:04.388974  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:04.389006  165314 retry.go:31] will retry after 4.649889332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:06.161419  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:08.661313  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:08.907702  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:08.963026  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:08.963063  165314 retry.go:31] will retry after 13.849348803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.039214  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:09.093068  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:09.093106  165314 retry.go:31] will retry after 13.971081611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:11.160802  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:13.161178  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:15.661334  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:18.161260  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:20.661258  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:22.813360  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:22.868201  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:22.868242  165314 retry.go:31] will retry after 18.044250351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.064572  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:23.119242  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:23.119274  165314 retry.go:31] will retry after 13.659632674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:23.160839  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:25.161596  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:27.661763  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:30.160996  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:32.161239  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:34.161289  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:36.161816  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:36.780076  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:18:36.835066  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:36.835125  165314 retry.go:31] will retry after 24.301634838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:38.661408  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:40.661719  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:18:40.913117  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:18:40.970652  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:18:40.970689  165314 retry.go:31] will retry after 29.623667492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:18:43.161261  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:45.661475  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:48.161429  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:50.661022  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:52.661560  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:55.161328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:18:57.661202  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:00.160922  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:01.137748  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:01.192234  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:01.192272  165314 retry.go:31] will retry after 46.151732803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:02.161479  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:04.661301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:07.161112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:09.161773  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:10.595151  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:10.649317  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:19:10.649348  165314 retry.go:31] will retry after 34.509482074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:11.661098  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:13.661164  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:16.160980  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:18.161117  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:20.661118  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:23.161018  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:25.661094  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:28.160962  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:30.660939  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:33.160970  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:35.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:38.161485  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:40.161694  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:42.660977  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:44.661727  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:45.159116  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:19:45.213330  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:45.213473  165314 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:19:47.161077  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:19:47.344346  165314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:19:47.398881  165314 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:19:47.398996  165314 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:19:47.402318  165314 out.go:179] * Enabled addons: 
	I1008 15:19:47.403650  165314 addons.go:514] duration metric: took 1m59.372237661s for enable addons: enabled=[]
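	(Note: the repeated "apply failed, will retry after ..." entries above come from minikube's addon apply loop backing off between failed kubectl invocations while the apiserver never comes up, until the addon callbacks give up and the addons phase ends with enabled=[]. Below is a small, self-contained Go sketch of that retry-with-backoff pattern; it is not minikube's retry package, and the jittered delays, attempt count, and manifest path are illustrative assumptions.)

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl and retries with a growing, jittered
	// delay, mirroring the "will retry after ..." pattern in the log above.
	func applyWithRetry(manifest string, attempts int) error {
		backoff := time.Second
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
			// Grow the delay and add jitter so concurrent appliers do not retry in lockstep.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}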
	W1008 15:19:49.161145  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:51.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:53.661015  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:56.161063  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:19:58.161490  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:00.161646  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:02.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:05.160822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:07.160868  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:09.660834  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:11.660990  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:14.160903  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:16.161003  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:18.161359  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:20.661641  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:22.661701  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:24.661824  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:27.160924  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:29.660942  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:31.661112  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:34.160994  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:36.161213  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:38.661173  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:41.160956  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:43.660831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:45.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:48.161052  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:50.661054  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:53.160840  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:55.660921  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:20:58.160916  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:00.161810  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:02.660851  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:04.661882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:07.160828  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:09.161076  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:11.661048  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:14.160838  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:16.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:18.161296  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:20.661517  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:23.161552  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:25.161729  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:27.660935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:29.661209  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:32.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:34.660964  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:36.661288  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:38.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:41.160885  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:43.660822  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:45.661636  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:47.661793  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:50.160858  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:52.160906  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:54.660820  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:56.660941  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:21:59.160837  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:01.160968  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:03.660914  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:05.661127  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:08.161235  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:10.661328  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:13.160998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:15.661224  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:18.161405  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:20.661507  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:22.661568  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:25.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:27.161060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:29.161301  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:31.161696  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:33.161814  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:35.661109  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:38.161505  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:40.661023  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:43.160905  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:45.161935  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:47.661060  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:49.661627  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:52.160997  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:54.660969  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:56.661103  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:22:58.661532  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:01.160960  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:03.161002  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:05.161043  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:07.161211  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:09.161741  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:11.660998  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:13.661045  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:15.661278  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:17.661374  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:20.160831  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:22.161040  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:24.660973  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:26.661132  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:28.661354  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:30.661576  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:32.661875  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:35.160936  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:37.660961  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:39.661150  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:42.161007  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:44.660882  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:23:46.661243  165314 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:23:48.160817  165314 node_ready.go:38] duration metric: took 6m0.000537691s for node "ha-430216" to be "Ready" ...
	I1008 15:23:48.163540  165314 out.go:203] 
	W1008 15:23:48.165619  165314 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:23:48.165638  165314 out.go:285] * 
	W1008 15:23:48.167470  165314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:23:48.169005  165314 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723360467Z" level=info msg="createCtr: removing container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.723396505Z" level=info msg="createCtr: deleting container 3d25e29407b0203d9a9c6947fdf94fd8c9ff9ed7de139f14161eb81832fc2662 from storage" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:40 ha-430216 crio[521]: time="2025-10-08T15:23:40.725857077Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=67c3b9a7-d3bd-4d55-bcf2-03288e18779d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.699756741Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a3da441f-a3c0-455a-901b-e7775d5ce11f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.700761676Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b04006d-4701-4e1e-adec-c15f8af9118f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.701717413Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.702118129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.706697114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.707282936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.723578924Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.724988347Z" level=info msg="createCtr: deleting container ID dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from idIndex" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725028065Z" level=info msg="createCtr: removing container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.725061393Z" level=info msg="createCtr: deleting container dbfc7eb99647c5574f7e0739f2298c21dc7349b6daa95e7791ccbb7c9ff2b1c8 from storage" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:42 ha-430216 crio[521]: time="2025-10-08T15:23:42.727163909Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=dec69787-1999-4870-9924-85aa142337b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.699561286Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=501d827b-6905-409a-8de5-af070b4d21e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.700580265Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=60ff1316-fdf3-4f8b-b900-2998c6dab5c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701518645Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.701736879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.70492584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.705521913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.721364493Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.722696209Z" level=info msg="createCtr: deleting container ID 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from idIndex" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72272718Z" level=info msg="createCtr: removing container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.72275607Z" level=info msg="createCtr: deleting container 63bd1d16782a7f9fb46504e25c8b6619f97e2242fa14bdb64a3bd2afa2d28ef8 from storage" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:23:43 ha-430216 crio[521]: time="2025-10-08T15:23:43.724994672Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=b4f5f415-a25c-4dd2-a965-4c1e0e1c423e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:23:52.588949    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:52.589502    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:52.591151    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:52.591621    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:23:52.593206    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:23:52 up  3:06,  0 user,  load average: 0.08, 0.06, 0.12
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.337933     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:23:42 ha-430216 kubelet[670]: I1008 15:23:42.511486     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.511906     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.554027     670 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.699290     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727486     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > podSandboxID="c7e2493b45b33500a696c69a54e5c9459bf12c1c2807d02611a3a297916303fe"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727596     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:42 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:42 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:42 ha-430216 kubelet[670]: E1008 15:23:42.727630     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.698969     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725313     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > podSandboxID="ffdca120ab327ae1141443ec3ae02f29163bf9f30626a405e44964b17b1c7055"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725428     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:23:43 ha-430216 kubelet[670]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:23:43 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:23:43 ha-430216 kubelet[670]: E1008 15:23:43.725479     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:23:46 ha-430216 kubelet[670]: E1008 15:23:46.717569     670 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:23:48 ha-430216 kubelet[670]: E1008 15:23:48.280070     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d12e4fbf3bd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,LastTimestamp:2025-10-08 15:17:46.685666237 +0000 UTC m=+0.078853330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:23:49 ha-430216 kubelet[670]: E1008 15:23:49.339197     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:23:49 ha-430216 kubelet[670]: I1008 15:23:49.514026     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:23:49 ha-430216 kubelet[670]: E1008 15:23:49.514554     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (297.964293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-430216 stop --alsologtostderr -v 5: (1.214850706s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5: exit status 7 (65.715905ms)

                                                
                                                
-- stdout --
	ha-430216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:23:54.238296  170874 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.238596  170874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.238607  170874 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.238612  170874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.238867  170874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.239100  170874 out.go:368] Setting JSON to false
	I1008 15:23:54.239133  170874 mustload.go:65] Loading cluster: ha-430216
	I1008 15:23:54.239291  170874 notify.go:220] Checking for updates...
	I1008 15:23:54.239589  170874 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.239609  170874 status.go:174] checking status of ha-430216 ...
	I1008 15:23:54.240112  170874 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.257240  170874 status.go:371] ha-430216 host status = "Stopped" (err=<nil>)
	I1008 15:23:54.257261  170874 status.go:384] host is not running, skipping remaining checks
	I1008 15:23:54.257267  170874 status.go:176] ha-430216 status: &{Name:ha-430216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5": ha-430216
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5": ha-430216
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-430216 status --alsologtostderr -v 5": ha-430216
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:17:40.438320544Z",
	            "FinishedAt": "2025-10-08T15:23:53.312815942Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 7 (67.809757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-430216" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (368.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1008 15:27:26.903653   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.080906335s)

                                                
                                                
-- stdout --
	* [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:23:54.390098  170932 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.390354  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390364  170932 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.390369  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390587  170932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.391035  170932 out.go:368] Setting JSON to false
	I1008 15:23:54.391904  170932 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11185,"bootTime":1759925849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:23:54.392000  170932 start.go:141] virtualization: kvm guest
	I1008 15:23:54.394179  170932 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:23:54.395670  170932 notify.go:220] Checking for updates...
	I1008 15:23:54.395796  170932 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:23:54.397240  170932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:23:54.398569  170932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:23:54.399837  170932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:23:54.401102  170932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:23:54.402344  170932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:23:54.404021  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.404562  170932 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:23:54.427962  170932 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:23:54.428101  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.482745  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.472714788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.482901  170932 docker.go:318] overlay module found
	I1008 15:23:54.484784  170932 out.go:179] * Using the docker driver based on existing profile
	I1008 15:23:54.486099  170932 start.go:305] selected driver: docker
	I1008 15:23:54.486113  170932 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.486218  170932 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:23:54.486309  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.544832  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.535081224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.545438  170932 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:23:54.545485  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:23:54.545534  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:23:54.545577  170932 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.547619  170932 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:23:54.548799  170932 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:23:54.550084  170932 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:23:54.551306  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:23:54.551343  170932 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:23:54.551354  170932 cache.go:58] Caching tarball of preloaded images
	I1008 15:23:54.551396  170932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:23:54.551479  170932 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:23:54.551495  170932 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:23:54.551611  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.571805  170932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:23:54.571832  170932 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:23:54.571847  170932 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:23:54.571871  170932 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:23:54.571935  170932 start.go:364] duration metric: took 46.811µs to acquireMachinesLock for "ha-430216"
	I1008 15:23:54.571952  170932 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:23:54.571957  170932 fix.go:54] fixHost starting: 
	I1008 15:23:54.572177  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.590507  170932 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:23:54.590541  170932 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:23:54.592367  170932 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:23:54.592465  170932 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:23:54.836670  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.855000  170932 kic.go:430] container "ha-430216" state is running.
	I1008 15:23:54.855424  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:54.872582  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.872800  170932 machine.go:93] provisionDockerMachine start ...
	I1008 15:23:54.872862  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:54.890640  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:54.890934  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:54.890952  170932 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:23:54.891655  170932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34990->127.0.0.1:32793: read: connection reset by peer
	I1008 15:23:58.039834  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.039875  170932 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:23:58.039947  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.058681  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.058904  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.058916  170932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:23:58.215272  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.215342  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.232894  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.233113  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.233130  170932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:23:58.379259  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:23:58.379290  170932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:23:58.379314  170932 ubuntu.go:190] setting up certificates
	I1008 15:23:58.379327  170932 provision.go:84] configureAuth start
	I1008 15:23:58.379406  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:58.396766  170932 provision.go:143] copyHostCerts
	I1008 15:23:58.396820  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396849  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:23:58.396858  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396924  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:23:58.397017  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397036  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:23:58.397043  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397070  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:23:58.397136  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397153  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:23:58.397159  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397183  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:23:58.397247  170932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
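(Aside: the "generating server cert" line above shows the SAN list that ends up in the machine's server certificate. As a rough illustration only, the hypothetical Go sketch below creates a certificate carrying the same DNS and IP SANs; it is self-signed for brevity and is not minikube's provisioning code, which signs against the profile's CA key.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the hypothetical server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-430216"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above.
            DNSNames:    []string{"ha-430216", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        // Self-signed (template used as its own parent) purely for the sketch.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }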
	I1008 15:23:58.536180  170932 provision.go:177] copyRemoteCerts
	I1008 15:23:58.536249  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:23:58.536293  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.554351  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:58.657806  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:23:58.657871  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:23:58.675737  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:23:58.675790  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:23:58.692969  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:23:58.693030  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:23:58.710763  170932 provision.go:87] duration metric: took 331.416748ms to configureAuth
	I1008 15:23:58.710798  170932 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:23:58.711012  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:58.711117  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.728810  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.729089  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.729109  170932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:23:58.987429  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:23:58.987476  170932 machine.go:96] duration metric: took 4.114660829s to provisionDockerMachine
	I1008 15:23:58.987492  170932 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:23:58.987506  170932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:23:58.987579  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:23:58.987638  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.004627  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.108395  170932 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:23:59.111973  170932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:23:59.111998  170932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:23:59.112007  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:23:59.112055  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:23:59.112144  170932 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:23:59.112167  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:23:59.112248  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:23:59.119933  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:23:59.137911  170932 start.go:296] duration metric: took 150.401166ms for postStartSetup
	I1008 15:23:59.137987  170932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:59.138020  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.155852  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.255756  170932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:23:59.260399  170932 fix.go:56] duration metric: took 4.688432219s for fixHost
	I1008 15:23:59.260429  170932 start.go:83] releasing machines lock for "ha-430216", held for 4.688483389s
	I1008 15:23:59.260521  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:59.277825  170932 ssh_runner.go:195] Run: cat /version.json
	I1008 15:23:59.277877  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.277923  170932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:23:59.278022  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.295429  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.296320  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.446135  170932 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:59.452641  170932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:23:59.487637  170932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:23:59.492434  170932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:23:59.492513  170932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:23:59.500423  170932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:23:59.500461  170932 start.go:495] detecting cgroup driver to use...
	I1008 15:23:59.500493  170932 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:23:59.500529  170932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:23:59.515264  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:23:59.528404  170932 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:23:59.528483  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:23:59.543183  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:23:59.555554  170932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:23:59.635371  170932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:23:59.716233  170932 docker.go:234] disabling docker service ...
	I1008 15:23:59.716295  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:23:59.730610  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:23:59.743097  170932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:23:59.823687  170932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:23:59.905402  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:23:59.918149  170932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:23:59.932053  170932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:23:59.932109  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.941582  170932 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:23:59.941641  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.951328  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.960338  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.969240  170932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:23:59.977804  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.986975  170932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.995767  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:24:00.004950  170932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:24:00.012696  170932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:24:00.020160  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.097921  170932 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:24:00.199137  170932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:24:00.199212  170932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:24:00.203530  170932 start.go:563] Will wait 60s for crictl version
	I1008 15:24:00.203585  170932 ssh_runner.go:195] Run: which crictl
	I1008 15:24:00.207581  170932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:24:00.233465  170932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:24:00.233549  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.261379  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.291399  170932 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:24:00.292703  170932 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:24:00.309684  170932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:24:00.313961  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.324165  170932 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:24:00.324285  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:24:00.324335  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.356265  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.356286  170932 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:24:00.356332  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.382025  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.382049  170932 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:24:00.382057  170932 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:24:00.382151  170932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:24:00.382262  170932 ssh_runner.go:195] Run: crio config
	I1008 15:24:00.427970  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:24:00.427994  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:24:00.428012  170932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:24:00.428037  170932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:24:00.428148  170932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:24:00.428211  170932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:24:00.436556  170932 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:24:00.436625  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:24:00.444239  170932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:24:00.456696  170932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:24:00.469551  170932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:24:00.482344  170932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:24:00.486243  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.496323  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.583018  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:00.605888  170932 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:24:00.605921  170932 certs.go:195] generating shared ca certs ...
	I1008 15:24:00.605944  170932 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:00.606081  170932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:24:00.606165  170932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:24:00.606183  170932 certs.go:257] generating profile certs ...
	I1008 15:24:00.606303  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:24:00.606399  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:24:00.606474  170932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:24:00.606489  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:24:00.606509  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:24:00.606530  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:24:00.606548  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:24:00.606570  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:24:00.606589  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:24:00.606605  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:24:00.606624  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:24:00.606692  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:24:00.606854  170932 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:24:00.606878  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:24:00.606924  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:24:00.606963  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:24:00.607001  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:24:00.607090  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:24:00.607139  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.607164  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.607187  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.607847  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:24:00.628567  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:24:00.648277  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:24:00.668208  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:24:00.692981  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:24:00.711936  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:24:00.730180  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:24:00.748157  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:24:00.765418  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:24:00.783359  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:24:00.801263  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:24:00.820380  170932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:24:00.833023  170932 ssh_runner.go:195] Run: openssl version
	I1008 15:24:00.839109  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:24:00.847959  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851748  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851803  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.886598  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:24:00.895271  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:24:00.904050  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908310  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908374  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.942319  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:24:00.950674  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:24:00.959197  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963232  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963293  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.997976  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:24:01.006382  170932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:24:01.011246  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:24:01.045831  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:24:01.080738  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:24:01.117746  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:24:01.163545  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:24:01.200651  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
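(Aside: the six openssl runs above each use "-checkend 86400", i.e. they ask whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration. A minimal, hypothetical Go equivalent for a single PEM file, assuming the path is passed as the first argument:)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Rough equivalent of `openssl x509 -checkend 86400` for one PEM file.
    func main() {
        raw, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }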
	I1008 15:24:01.235623  170932 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:24:01.235701  170932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:24:01.235756  170932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:24:01.262838  170932 cri.go:89] found id: ""
	I1008 15:24:01.262915  170932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:24:01.270824  170932 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:24:01.270845  170932 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:24:01.270896  170932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:24:01.278158  170932 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:24:01.278608  170932 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.278724  170932 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:24:01.278982  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.279536  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.279976  170932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:24:01.279993  170932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:24:01.279999  170932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:24:01.280005  170932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:24:01.280012  170932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:24:01.280060  170932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:24:01.280394  170932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:24:01.288129  170932 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:24:01.288168  170932 kubeadm.go:601] duration metric: took 17.316144ms to restartPrimaryControlPlane
	I1008 15:24:01.288180  170932 kubeadm.go:402] duration metric: took 52.566594ms to StartCluster
	I1008 15:24:01.288201  170932 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.288273  170932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.288806  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.289031  170932 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:24:01.289197  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:24:01.289144  170932 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:24:01.289252  170932 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:24:01.289269  170932 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:24:01.289295  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.289295  170932 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:24:01.289366  170932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:24:01.289764  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.289770  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.292489  170932 out.go:179] * Verifying Kubernetes components...
	I1008 15:24:01.293798  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:01.310293  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.310655  170932 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:24:01.310703  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.311185  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.312731  170932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:24:01.314130  170932 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.314152  170932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:24:01.314200  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.338454  170932 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.338481  170932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:24:01.338539  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.340562  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.356940  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.398004  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:01.411760  170932 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
	I1008 15:24:01.454106  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.466356  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:01.509002  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.509045  170932 retry.go:31] will retry after 350.610012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.520963  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.520999  170932 retry.go:31] will retry after 299.213164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
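(Aside: the repeated "apply failed, will retry" / "retry.go:31] will retry after ..." pairs above and below show each kubectl apply being retried with a growing delay while the apiserver on localhost:8443 is still coming back up. A minimal, hypothetical Go sketch of that style of backoff loop, not minikube's actual retry helper:)

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a growing, jittered delay
    // between failures, in the spirit of the "will retry after ..." log lines.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(i+1)*base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        _ = retry(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("connection refused")
            }
            return nil
        })
    }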
	I1008 15:24:01.820559  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.860141  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:01.874556  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.874593  170932 retry.go:31] will retry after 266.164942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.914615  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.914645  170932 retry.go:31] will retry after 424.567426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.141023  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.194986  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.195025  170932 retry.go:31] will retry after 499.143477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.340348  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.393985  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.394031  170932 retry.go:31] will retry after 437.996301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.694684  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.750281  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.750313  170932 retry.go:31] will retry after 867.228296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.832643  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.887793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.887828  170932 retry.go:31] will retry after 823.523521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:03.412577  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:03.617846  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:03.671770  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.671806  170932 retry.go:31] will retry after 1.456377841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.711980  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:03.765473  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.765505  170932 retry.go:31] will retry after 1.817640621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.128796  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:05.183743  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.183773  170932 retry.go:31] will retry after 2.265153126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.583676  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:05.637633  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.637664  170932 retry.go:31] will retry after 990.621367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:05.912406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:06.628981  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:06.685508  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:06.685550  170932 retry.go:31] will retry after 2.782570694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.449623  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:07.504065  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.504099  170932 retry.go:31] will retry after 3.741412594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:07.913335  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:09.469210  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:09.523862  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:09.523895  170932 retry.go:31] will retry after 5.181528653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:10.413099  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:11.245787  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:11.300714  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:11.300754  170932 retry.go:31] will retry after 3.449826104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:12.913103  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:14.705995  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:14.751595  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:14.762935  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.762977  170932 retry.go:31] will retry after 9.489237441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.806608  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.806638  170932 retry.go:31] will retry after 4.115281113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.913315  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:17.413350  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:18.922811  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:18.976958  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:18.976989  170932 retry.go:31] will retry after 5.239648896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:19.913368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:22.413029  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:24.216863  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:24.252645  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:24.273309  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.273340  170932 retry.go:31] will retry after 7.387859815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.310361  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.310404  170932 retry.go:31] will retry after 9.945221325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.913128  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:27.413070  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:29.413432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:31.662088  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:31.719810  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:31.719855  170932 retry.go:31] will retry after 13.420079077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:31.912559  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:33.913385  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:34.255764  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:34.312247  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:34.312278  170932 retry.go:31] will retry after 16.191125862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:36.413100  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:38.912942  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:41.412907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:43.913009  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:45.140914  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:45.198262  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:45.198294  170932 retry.go:31] will retry after 34.266392158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:45.913291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:48.412578  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:50.412878  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:50.504204  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:50.559793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:50.559828  170932 retry.go:31] will retry after 27.14173261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:52.413249  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:54.913400  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:57.412504  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:59.413163  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:01.913142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:04.413050  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:06.912907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:09.412961  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:11.912962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:14.412962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:16.912950  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:17.702652  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:17.758226  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:25:17.758259  170932 retry.go:31] will retry after 32.802414533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.412794  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:19.464923  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:25:19.521026  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.521181  170932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:25:21.912889  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:24.412929  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:26.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:28.913480  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:31.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:33.912646  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:36.412761  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:38.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:41.412956  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:46.412898  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:48.912819  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:50.561829  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:50.619960  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:50.620086  170932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:25:50.622296  170932 out.go:179] * Enabled addons: 
	I1008 15:25:50.623547  170932 addons.go:514] duration metric: took 1m49.334411127s for enable addons: enabled=[]
	W1008 15:25:50.912964  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:52.913239  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:55.413142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:57.912857  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:59.913308  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:02.412659  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:04.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:06.912502  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:09.412856  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:11.913398  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:14.413317  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:16.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:18.912361  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:20.912680  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:22.912778  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:24.913134  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:27.413083  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:29.912714  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:31.913049  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:34.412756  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:36.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:38.412909  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:40.912423  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:42.912843  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:45.412690  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:47.412867  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:49.413080  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:51.912848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:54.412994  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:56.413207  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:58.913394  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:01.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:03.912777  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:05.913168  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:07.913342  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:10.412475  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:12.412717  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:14.413066  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:16.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:18.912339  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:20.912432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:22.912695  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:24.913188  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:26.913438  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:29.412779  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:31.413129  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:33.413382  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:35.912652  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:37.912766  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:39.913252  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:41.913487  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:44.412715  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:46.912551  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:49.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:51.412877  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:53.413097  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:55.912620  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:58.412429  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:00.413171  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:02.912485  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:04.912746  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:07.412653  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:09.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:11.912560  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:14.412699  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:16.912516  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:19.412629  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:21.412991  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:23.413291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:25.912581  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:28.412380  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:30.412549  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:32.413358  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:34.912693  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:37.412624  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:39.412901  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:41.412960  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:43.413213  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:45.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:47.913336  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:50.412528  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:52.412731  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:54.412969  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:56.413193  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:58.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:00.912494  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:02.912626  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:04.912935  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:07.412741  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:09.412872  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:11.413111  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:13.413251  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:15.912635  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:18.412411  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:20.413378  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:22.913417  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:25.412543  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:27.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:29.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:31.913167  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:34.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:36.912368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:39.412698  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:41.412848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:45.912795  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:48.412572  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:50.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:52.412796  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:54.412926  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:56.912785  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:59.413204  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:30:01.412330  170932 node_ready.go:38] duration metric: took 6m0.000510744s for node "ha-430216" to be "Ready" ...
	I1008 15:30:01.414615  170932 out.go:203] 
	W1008 15:30:01.416405  170932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:30:01.416422  170932 out.go:285] * 
	* 
	W1008 15:30:01.418069  170932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:30:01.419605  170932 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
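
The repeated "connection refused" entries in the stderr above mean the TCP connection to the apiserver endpoint (192.168.49.2:8443) never opens, so the six-minute node-Ready wait can only time out. The following is a minimal, illustrative Go sketch of that distinction, not part of the test suite; the endpoint is taken from the log above and the 2-second timeout is an arbitrary value chosen only for the example.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If this dial fails with "connection refused", the apiserver is not
	// listening at all, which matches the node_ready retry loop above;
	// only once the port accepts connections is polling the node's
	// "Ready" condition meaningful.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port accepts connections; Ready status can be polled")
}
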
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:23:54.620410168Z",
	            "FinishedAt": "2025-10-08T15:23:53.312815942Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e2cf91d05cea12f49402d768350213ad8540946f78303d27396fc8da1227d8",
	            "SandboxKey": "/var/run/docker/netns/37e2cf91d05c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c2:30:1a:f8:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "04894192d1d8344b8201d455ae75ce4042eaf442fdd916a3acf3d43a6c3b54ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
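
The NetworkSettings.Ports block above shows the container's fixed ports published on ephemeral host ports (8443/tcp -> 127.0.0.1:32796 in this run). Below is a minimal, illustrative Go sketch of reading that mapping with the same Go-template query that the cli_runner lines further down in this log use for 22/tcp; the profile name and port come from this report, and the snippet is not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape as the "docker container inspect -f ... HostPort"
	// invocations later in this log, pointed at 8443/tcp instead of 22/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"ha-430216").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}
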
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (316.122813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                               │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │ 08 Oct 25 15:23 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:23:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:23:54.390098  170932 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.390354  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390364  170932 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.390369  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390587  170932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.391035  170932 out.go:368] Setting JSON to false
	I1008 15:23:54.391904  170932 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11185,"bootTime":1759925849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:23:54.392000  170932 start.go:141] virtualization: kvm guest
	I1008 15:23:54.394179  170932 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:23:54.395670  170932 notify.go:220] Checking for updates...
	I1008 15:23:54.395796  170932 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:23:54.397240  170932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:23:54.398569  170932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:23:54.399837  170932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:23:54.401102  170932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:23:54.402344  170932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:23:54.404021  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.404562  170932 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:23:54.427962  170932 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:23:54.428101  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.482745  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.472714788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.482901  170932 docker.go:318] overlay module found
	I1008 15:23:54.484784  170932 out.go:179] * Using the docker driver based on existing profile
	I1008 15:23:54.486099  170932 start.go:305] selected driver: docker
	I1008 15:23:54.486113  170932 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.486218  170932 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:23:54.486309  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.544832  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.535081224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.545438  170932 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:23:54.545485  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:23:54.545534  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:23:54.545577  170932 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.547619  170932 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:23:54.548799  170932 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:23:54.550084  170932 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:23:54.551306  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:23:54.551343  170932 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:23:54.551354  170932 cache.go:58] Caching tarball of preloaded images
	I1008 15:23:54.551396  170932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:23:54.551479  170932 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:23:54.551495  170932 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:23:54.551611  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.571805  170932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:23:54.571832  170932 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:23:54.571847  170932 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:23:54.571871  170932 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:23:54.571935  170932 start.go:364] duration metric: took 46.811µs to acquireMachinesLock for "ha-430216"
	I1008 15:23:54.571952  170932 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:23:54.571957  170932 fix.go:54] fixHost starting: 
	I1008 15:23:54.572177  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.590507  170932 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:23:54.590541  170932 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:23:54.592367  170932 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:23:54.592465  170932 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:23:54.836670  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.855000  170932 kic.go:430] container "ha-430216" state is running.
	I1008 15:23:54.855424  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:54.872582  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.872800  170932 machine.go:93] provisionDockerMachine start ...
	I1008 15:23:54.872862  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:54.890640  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:54.890934  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:54.890952  170932 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:23:54.891655  170932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34990->127.0.0.1:32793: read: connection reset by peer
	I1008 15:23:58.039834  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.039875  170932 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:23:58.039947  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.058681  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.058904  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.058916  170932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:23:58.215272  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.215342  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.232894  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.233113  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.233130  170932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:23:58.379259  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:23:58.379290  170932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:23:58.379314  170932 ubuntu.go:190] setting up certificates
	I1008 15:23:58.379327  170932 provision.go:84] configureAuth start
	I1008 15:23:58.379406  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:58.396766  170932 provision.go:143] copyHostCerts
	I1008 15:23:58.396820  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396849  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:23:58.396858  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396924  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:23:58.397017  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397036  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:23:58.397043  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397070  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:23:58.397136  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397153  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:23:58.397159  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397183  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:23:58.397247  170932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:23:58.536180  170932 provision.go:177] copyRemoteCerts
	I1008 15:23:58.536249  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:23:58.536293  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.554351  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:58.657806  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:23:58.657871  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:23:58.675737  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:23:58.675790  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:23:58.692969  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:23:58.693030  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:23:58.710763  170932 provision.go:87] duration metric: took 331.416748ms to configureAuth
	I1008 15:23:58.710798  170932 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:23:58.711012  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:58.711117  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.728810  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.729089  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.729109  170932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:23:58.987429  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:23:58.987476  170932 machine.go:96] duration metric: took 4.114660829s to provisionDockerMachine
	I1008 15:23:58.987492  170932 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:23:58.987506  170932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:23:58.987579  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:23:58.987638  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.004627  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.108395  170932 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:23:59.111973  170932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:23:59.111998  170932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:23:59.112007  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:23:59.112055  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:23:59.112144  170932 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:23:59.112167  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:23:59.112248  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:23:59.119933  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:23:59.137911  170932 start.go:296] duration metric: took 150.401166ms for postStartSetup
	I1008 15:23:59.137987  170932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:59.138020  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.155852  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.255756  170932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:23:59.260399  170932 fix.go:56] duration metric: took 4.688432219s for fixHost
	I1008 15:23:59.260429  170932 start.go:83] releasing machines lock for "ha-430216", held for 4.688483389s
	I1008 15:23:59.260521  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:59.277825  170932 ssh_runner.go:195] Run: cat /version.json
	I1008 15:23:59.277877  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.277923  170932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:23:59.278022  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.295429  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.296320  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.446135  170932 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:59.452641  170932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:23:59.487637  170932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:23:59.492434  170932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:23:59.492513  170932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:23:59.500423  170932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:23:59.500461  170932 start.go:495] detecting cgroup driver to use...
	I1008 15:23:59.500493  170932 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:23:59.500529  170932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:23:59.515264  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:23:59.528404  170932 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:23:59.528483  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:23:59.543183  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:23:59.555554  170932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:23:59.635371  170932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:23:59.716233  170932 docker.go:234] disabling docker service ...
	I1008 15:23:59.716295  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:23:59.730610  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:23:59.743097  170932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:23:59.823687  170932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:23:59.905402  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:23:59.918149  170932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:23:59.932053  170932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:23:59.932109  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.941582  170932 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:23:59.941641  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.951328  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.960338  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.969240  170932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:23:59.977804  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.986975  170932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.995767  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:24:00.004950  170932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:24:00.012696  170932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:24:00.020160  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.097921  170932 ssh_runner.go:195] Run: sudo systemctl restart crio
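As an aside, the sed and sysctl edits above rewrite CRI-O's drop-in config before this restart. A minimal way to confirm the result from inside the node (assuming a shell on it, e.g. `minikube ssh -p ha-430216`) would be:

	# Show the values the commands above are expected to leave in /etc/crio/crio.conf.d/02-crio.conf
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed invocations logged above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",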
	I1008 15:24:00.199137  170932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:24:00.199212  170932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:24:00.203530  170932 start.go:563] Will wait 60s for crictl version
	I1008 15:24:00.203585  170932 ssh_runner.go:195] Run: which crictl
	I1008 15:24:00.207581  170932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:24:00.233465  170932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:24:00.233549  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.261379  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.291399  170932 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:24:00.292703  170932 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:24:00.309684  170932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:24:00.313961  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.324165  170932 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:24:00.324285  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:24:00.324335  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.356265  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.356286  170932 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:24:00.356332  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.382025  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.382049  170932 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:24:00.382057  170932 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:24:00.382151  170932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:24:00.382262  170932 ssh_runner.go:195] Run: crio config
	I1008 15:24:00.427970  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:24:00.427994  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:24:00.428012  170932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:24:00.428037  170932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:24:00.428148  170932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
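Note on the KubeletConfiguration above: setting the evictionHard thresholds to "0%" together with imageGCHighThresholdPercent: 100 effectively disables disk-pressure eviction and image garbage collection on the test node, as its inline comment says. Once the API server is reachable, the kubelet's merged runtime config can be read back and compared against this block; a hedged example, reusing the node name and kubeconfig path from this run:

	# The /configz node-proxy endpoint returns the kubelet's active configuration as JSON.
	kubectl --kubeconfig=/home/jenkins/minikube-integration/21681-94984/kubeconfig \
	    get --raw "/api/v1/nodes/ha-430216/proxy/configz" \
	    | grep -oE '"(cgroupDriver|containerRuntimeEndpoint|hairpinMode)":"[^"]*"'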
	I1008 15:24:00.428211  170932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:24:00.436556  170932 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:24:00.436625  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:24:00.444239  170932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:24:00.456696  170932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:24:00.469551  170932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:24:00.482344  170932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:24:00.486243  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.496323  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.583018  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:00.605888  170932 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:24:00.605921  170932 certs.go:195] generating shared ca certs ...
	I1008 15:24:00.605944  170932 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:00.606081  170932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:24:00.606165  170932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:24:00.606183  170932 certs.go:257] generating profile certs ...
	I1008 15:24:00.606303  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:24:00.606399  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:24:00.606474  170932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:24:00.606489  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:24:00.606509  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:24:00.606530  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:24:00.606548  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:24:00.606570  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:24:00.606589  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:24:00.606605  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:24:00.606624  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:24:00.606692  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:24:00.606854  170932 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:24:00.606878  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:24:00.606924  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:24:00.606963  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:24:00.607001  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:24:00.607090  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:24:00.607139  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.607164  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.607187  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.607847  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:24:00.628567  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:24:00.648277  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:24:00.668208  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:24:00.692981  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:24:00.711936  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:24:00.730180  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:24:00.748157  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:24:00.765418  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:24:00.783359  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:24:00.801263  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:24:00.820380  170932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:24:00.833023  170932 ssh_runner.go:195] Run: openssl version
	I1008 15:24:00.839109  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:24:00.847959  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851748  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851803  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.886598  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:24:00.895271  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:24:00.904050  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908310  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908374  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.942319  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:24:00.950674  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:24:00.959197  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963232  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963293  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.997976  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:24:01.006382  170932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:24:01.011246  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:24:01.045831  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:24:01.080738  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:24:01.117746  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:24:01.163545  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:24:01.200651  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
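The openssl calls above serve two purposes: `-hash -noout` prints the subject-name hash that OpenSSL uses to look up CA certificates, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names created above come from, while `-checkend 86400` succeeds only if the certificate is still valid 86400 seconds (24 hours) from now. A small stand-alone illustration using the same paths:

	# Prints the subject hash of the minikube CA; this run links it as /etc/ssl/certs/b5213941.0
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

	# Exit status 0 (and "Certificate will not expire") if the cert outlives the next 24 hours
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400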
	I1008 15:24:01.235623  170932 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:24:01.235701  170932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:24:01.235756  170932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:24:01.262838  170932 cri.go:89] found id: ""
	I1008 15:24:01.262915  170932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:24:01.270824  170932 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:24:01.270845  170932 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:24:01.270896  170932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:24:01.278158  170932 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:24:01.278608  170932 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.278724  170932 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:24:01.278982  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.279536  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.279976  170932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:24:01.279993  170932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:24:01.279999  170932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:24:01.280005  170932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:24:01.280012  170932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:24:01.280060  170932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:24:01.280394  170932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:24:01.288129  170932 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:24:01.288168  170932 kubeadm.go:601] duration metric: took 17.316144ms to restartPrimaryControlPlane
	I1008 15:24:01.288180  170932 kubeadm.go:402] duration metric: took 52.566594ms to StartCluster
	I1008 15:24:01.288201  170932 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.288273  170932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.288806  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.289031  170932 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:24:01.289197  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:24:01.289144  170932 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:24:01.289252  170932 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:24:01.289269  170932 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:24:01.289295  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.289295  170932 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:24:01.289366  170932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:24:01.289764  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.289770  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.292489  170932 out.go:179] * Verifying Kubernetes components...
	I1008 15:24:01.293798  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:01.310293  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.310655  170932 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:24:01.310703  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.311185  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.312731  170932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:24:01.314130  170932 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.314152  170932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:24:01.314200  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.338454  170932 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.338481  170932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:24:01.338539  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.340562  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.356940  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.398004  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:01.411760  170932 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
	I1008 15:24:01.454106  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.466356  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:01.509002  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.509045  170932 retry.go:31] will retry after 350.610012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.520963  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.520999  170932 retry.go:31] will retry after 299.213164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.820559  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.860141  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:01.874556  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.874593  170932 retry.go:31] will retry after 266.164942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.914615  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.914645  170932 retry.go:31] will retry after 424.567426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.141023  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.194986  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.195025  170932 retry.go:31] will retry after 499.143477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.340348  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.393985  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.394031  170932 retry.go:31] will retry after 437.996301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.694684  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.750281  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.750313  170932 retry.go:31] will retry after 867.228296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.832643  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.887793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.887828  170932 retry.go:31] will retry after 823.523521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:03.412577  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:03.617846  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:03.671770  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.671806  170932 retry.go:31] will retry after 1.456377841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.711980  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:03.765473  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.765505  170932 retry.go:31] will retry after 1.817640621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.128796  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:05.183743  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.183773  170932 retry.go:31] will retry after 2.265153126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.583676  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:05.637633  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.637664  170932 retry.go:31] will retry after 990.621367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:05.912406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:06.628981  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:06.685508  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:06.685550  170932 retry.go:31] will retry after 2.782570694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.449623  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:07.504065  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.504099  170932 retry.go:31] will retry after 3.741412594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:07.913335  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:09.469210  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:09.523862  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:09.523895  170932 retry.go:31] will retry after 5.181528653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:10.413099  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:11.245787  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:11.300714  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:11.300754  170932 retry.go:31] will retry after 3.449826104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:12.913103  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:14.705995  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:14.751595  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:14.762935  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.762977  170932 retry.go:31] will retry after 9.489237441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.806608  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.806638  170932 retry.go:31] will retry after 4.115281113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.913315  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:17.413350  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:18.922811  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:18.976958  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:18.976989  170932 retry.go:31] will retry after 5.239648896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:19.913368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:22.413029  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:24.216863  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:24.252645  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:24.273309  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.273340  170932 retry.go:31] will retry after 7.387859815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.310361  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.310404  170932 retry.go:31] will retry after 9.945221325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.913128  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:27.413070  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:29.413432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:31.662088  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:31.719810  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:31.719855  170932 retry.go:31] will retry after 13.420079077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:31.912559  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:33.913385  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:34.255764  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:34.312247  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:34.312278  170932 retry.go:31] will retry after 16.191125862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:36.413100  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:38.912942  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:41.412907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:43.913009  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:45.140914  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:45.198262  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:45.198294  170932 retry.go:31] will retry after 34.266392158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:45.913291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:48.412578  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:50.412878  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:50.504204  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:50.559793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:50.559828  170932 retry.go:31] will retry after 27.14173261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:52.413249  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:54.913400  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:57.412504  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:59.413163  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:01.913142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:04.413050  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:06.912907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:09.412961  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:11.912962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:14.412962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:16.912950  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:17.702652  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:17.758226  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:25:17.758259  170932 retry.go:31] will retry after 32.802414533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.412794  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:19.464923  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:25:19.521026  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.521181  170932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:25:21.912889  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:24.412929  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:26.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:28.913480  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:31.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:33.912646  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:36.412761  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:38.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:41.412956  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:46.412898  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:48.912819  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:50.561829  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:50.619960  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:50.620086  170932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:25:50.622296  170932 out.go:179] * Enabled addons: 
	I1008 15:25:50.623547  170932 addons.go:514] duration metric: took 1m49.334411127s for enable addons: enabled=[]
	W1008 15:25:50.912964  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:52.913239  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:55.413142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:57.912857  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:59.913308  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:02.412659  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:04.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:06.912502  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:09.412856  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:11.913398  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:14.413317  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:16.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:18.912361  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:20.912680  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:22.912778  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:24.913134  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:27.413083  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:29.912714  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:31.913049  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:34.412756  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:36.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:38.412909  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:40.912423  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:42.912843  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:45.412690  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:47.412867  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:49.413080  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:51.912848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:54.412994  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:56.413207  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:58.913394  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:01.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:03.912777  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:05.913168  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:07.913342  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:10.412475  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:12.412717  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:14.413066  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:16.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:18.912339  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:20.912432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:22.912695  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:24.913188  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:26.913438  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:29.412779  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:31.413129  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:33.413382  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:35.912652  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:37.912766  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:39.913252  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:41.913487  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:44.412715  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:46.912551  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:49.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:51.412877  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:53.413097  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:55.912620  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:58.412429  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:00.413171  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:02.912485  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:04.912746  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:07.412653  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:09.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:11.912560  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:14.412699  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:16.912516  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:19.412629  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:21.412991  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:23.413291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:25.912581  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:28.412380  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:30.412549  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:32.413358  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:34.912693  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:37.412624  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:39.412901  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:41.412960  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:43.413213  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:45.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:47.913336  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:50.412528  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:52.412731  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:54.412969  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:56.413193  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:58.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:00.912494  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:02.912626  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:04.912935  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:07.412741  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:09.412872  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:11.413111  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:13.413251  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:15.912635  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:18.412411  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:20.413378  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:22.913417  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:25.412543  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:27.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:29.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:31.913167  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:34.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:36.912368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:39.412698  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:41.412848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:45.912795  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:48.412572  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:50.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:52.412796  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:54.412926  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:56.912785  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:59.413204  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:30:01.412330  170932 node_ready.go:38] duration metric: took 6m0.000510744s for node "ha-430216" to be "Ready" ...
	I1008 15:30:01.414615  170932 out.go:203] 
	W1008 15:30:01.416405  170932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:30:01.416422  170932 out.go:285] * 
	W1008 15:30:01.418069  170932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:30:01.419605  170932 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:29:52 ha-430216 crio[519]: time="2025-10-08T15:29:52.728116555Z" level=info msg="createCtr: removing container 14c77d89d7ad28213bddedfb90a5153a55d53973634229871da8af1ea6d12d40" id=871c5fef-92df-405a-8e39-f8060f2ef1f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:52 ha-430216 crio[519]: time="2025-10-08T15:29:52.728150127Z" level=info msg="createCtr: deleting container 14c77d89d7ad28213bddedfb90a5153a55d53973634229871da8af1ea6d12d40 from storage" id=871c5fef-92df-405a-8e39-f8060f2ef1f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:52 ha-430216 crio[519]: time="2025-10-08T15:29:52.730366711Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=871c5fef-92df-405a-8e39-f8060f2ef1f6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.700134118Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=be21ba0e-8444-4600-9f47-f72f29bd34d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.701160286Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=70687838-9859-46d9-a91f-9f853e6bf249 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.702096203Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.702325622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.705612737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.706231964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.72300056Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.724434627Z" level=info msg="createCtr: deleting container ID 8df2852cb53f4851ec1a2e8db101a67be770a12aa8cf455e49d5ead476d8784e from idIndex" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.724488957Z" level=info msg="createCtr: removing container 8df2852cb53f4851ec1a2e8db101a67be770a12aa8cf455e49d5ead476d8784e" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.724522923Z" level=info msg="createCtr: deleting container 8df2852cb53f4851ec1a2e8db101a67be770a12aa8cf455e49d5ead476d8784e from storage" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.726625792Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.700231797Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d8e97a2-6c6b-4886-bb78-cc95cd425e54 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.701260049Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=73f9077e-e23e-40ce-bcd2-e5d3d9264a8f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.702289864Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.702583221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.707097557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.707568794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.722897524Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724361764Z" level=info msg="createCtr: deleting container ID 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a from idIndex" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724404802Z" level=info msg="createCtr: removing container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724438929Z" level=info msg="createCtr: deleting container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a from storage" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.726577941Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:30:02.374116    1992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:02.374715    1992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:02.376320    1992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:02.376742    1992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:02.378007    1992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:30:02 up  3:12,  0 user,  load average: 0.10, 0.05, 0.09
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:29:52 ha-430216 kubelet[670]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:29:52 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:29:52 ha-430216 kubelet[670]: E1008 15:29:52.730818     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:29:53 ha-430216 kubelet[670]: E1008 15:29:53.699637     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:29:53 ha-430216 kubelet[670]: E1008 15:29:53.726911     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:29:53 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:29:53 ha-430216 kubelet[670]:  > podSandboxID="0ab57d9a2591adf8ab95f95fc92256df7785fedbd6767cf8e3bf4f53e2281c5b"
	Oct 08 15:29:53 ha-430216 kubelet[670]: E1008 15:29:53.727015     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:29:53 ha-430216 kubelet[670]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:29:53 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:29:53 ha-430216 kubelet[670]: E1008 15:29:53.727055     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.337957     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:29:56 ha-430216 kubelet[670]: I1008 15:29:56.513722     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.514162     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.830035     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d69f9632b12  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,LastTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:29:58 ha-430216 kubelet[670]: E1008 15:29:58.255288     670 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:30:00 ha-430216 kubelet[670]: E1008 15:30:00.715624     670 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.699758     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.726922     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:01 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > podSandboxID="5c80f4ae55b7598a9f6005fa50b7a289882f53a70f50aae01efaa01d217c1484"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727060     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:01 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727105     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	

                                                
                                                
-- /stdout --
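Note: every CreateContainer attempt in the CRI-O and kubelet logs above fails with "cannot open sd-bus: No such file or directory", which usually indicates the OCI runtime is configured for the systemd cgroup manager (the docker info later in this report shows CgroupDriver:systemd) but cannot reach a systemd bus from inside the node container. As a result etcd, kube-scheduler, kube-controller-manager and kube-apiserver never start, and every connection to 192.168.49.2:8443 above is refused. A minimal probe for the bus sockets, as a sketch only (the paths and the idea of running it inside the node, e.g. via `minikube ssh`, are assumptions, not part of the harness):

	// sdbus_probe.go - minimal sketch: check for the sockets an OCI runtime
	// needs when it uses the systemd cgroup manager. Paths are assumptions
	// about a typical kicbase node; adjust to the environment under test.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		candidates := []string{
			"/run/systemd/private",        // systemd manager bus
			"/run/dbus/system_bus_socket", // system D-Bus
		}
		for _, p := range candidates {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("missing: %s (%v)\n", p, err)
			} else {
				fmt.Printf("present: %s\n", p)
			}
		}
	}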
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (302.448865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-430216" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
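Note: the assertion above (ha_test.go:415) decodes the output of `out/minikube-linux-amd64 profile list --output json` and compares the profile's Status field, which is still "Starting" because the control plane never came up after the restart. A stripped-down sketch of that check (the struct models only the two fields used here; the real config carries many more, as the dump shows):

	// profile_status.go - minimal sketch of the check performed by ha_test.go:
	// run `minikube profile list --output json` and read each profile's Status.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // test expects "Degraded", run reported "Starting"
		}
	}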
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:23:54.620410168Z",
	            "FinishedAt": "2025-10-08T15:23:53.312815942Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e2cf91d05cea12f49402d768350213ad8540946f78303d27396fc8da1227d8",
	            "SandboxKey": "/var/run/docker/netns/37e2cf91d05c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c2:30:1a:f8:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "04894192d1d8344b8201d455ae75ce4042eaf442fdd916a3acf3d43a6c3b54ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
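Note: the NetworkSettings.Ports section of the inspect output above is where the host-side endpoints come from (22/tcp → 32793, 8443/tcp → 32796, all bound to 127.0.0.1). The "Last Start" log further down reads these back with a docker inspect format template; a minimal sketch of the same lookup for the apiserver port (container name and port are taken from this report, the program itself is purely illustrative):

	// port_lookup.go - sketch: read the host port docker published for 8443/tcp
	// on the ha-430216 container, the same style of lookup the start log uses
	// for 22/tcp.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"ha-430216").Output()
		if err != nil {
			log.Fatal(err)
		}
		// With the inspect output above this prints 127.0.0.1:32796.
		fmt.Println("apiserver is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}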
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (296.981529ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                               │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │ 08 Oct 25 15:23 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:23:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:23:54.390098  170932 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.390354  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390364  170932 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.390369  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390587  170932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.391035  170932 out.go:368] Setting JSON to false
	I1008 15:23:54.391904  170932 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11185,"bootTime":1759925849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:23:54.392000  170932 start.go:141] virtualization: kvm guest
	I1008 15:23:54.394179  170932 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:23:54.395670  170932 notify.go:220] Checking for updates...
	I1008 15:23:54.395796  170932 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:23:54.397240  170932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:23:54.398569  170932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:23:54.399837  170932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:23:54.401102  170932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:23:54.402344  170932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:23:54.404021  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.404562  170932 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:23:54.427962  170932 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:23:54.428101  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.482745  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.472714788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.482901  170932 docker.go:318] overlay module found
	I1008 15:23:54.484784  170932 out.go:179] * Using the docker driver based on existing profile
	I1008 15:23:54.486099  170932 start.go:305] selected driver: docker
	I1008 15:23:54.486113  170932 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.486218  170932 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:23:54.486309  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.544832  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.535081224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.545438  170932 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:23:54.545485  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:23:54.545534  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:23:54.545577  170932 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:23:54.547619  170932 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:23:54.548799  170932 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:23:54.550084  170932 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:23:54.551306  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:23:54.551343  170932 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:23:54.551354  170932 cache.go:58] Caching tarball of preloaded images
	I1008 15:23:54.551396  170932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:23:54.551479  170932 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:23:54.551495  170932 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:23:54.551611  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.571805  170932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:23:54.571832  170932 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:23:54.571847  170932 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:23:54.571871  170932 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:23:54.571935  170932 start.go:364] duration metric: took 46.811µs to acquireMachinesLock for "ha-430216"
	I1008 15:23:54.571952  170932 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:23:54.571957  170932 fix.go:54] fixHost starting: 
	I1008 15:23:54.572177  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.590507  170932 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:23:54.590541  170932 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:23:54.592367  170932 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:23:54.592465  170932 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:23:54.836670  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.855000  170932 kic.go:430] container "ha-430216" state is running.
	I1008 15:23:54.855424  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:54.872582  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.872800  170932 machine.go:93] provisionDockerMachine start ...
	I1008 15:23:54.872862  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:54.890640  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:54.890934  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:54.890952  170932 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:23:54.891655  170932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34990->127.0.0.1:32793: read: connection reset by peer
	I1008 15:23:58.039834  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.039875  170932 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:23:58.039947  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.058681  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.058904  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.058916  170932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:23:58.215272  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.215342  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.232894  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.233113  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.233130  170932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:23:58.379259  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:23:58.379290  170932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:23:58.379314  170932 ubuntu.go:190] setting up certificates
	I1008 15:23:58.379327  170932 provision.go:84] configureAuth start
	I1008 15:23:58.379406  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:58.396766  170932 provision.go:143] copyHostCerts
	I1008 15:23:58.396820  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396849  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:23:58.396858  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396924  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:23:58.397017  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397036  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:23:58.397043  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397070  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:23:58.397136  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397153  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:23:58.397159  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397183  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:23:58.397247  170932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
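
The log line above records regeneration of the machine's server certificate with SANs [127.0.0.1 192.168.49.2 ha-430216 localhost minikube]. A hypothetical sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 follows; it is not minikube's provision code, and loading of caCert/caKey is omitted.

// hedged sketch: sign a server cert with DNS/IP SANs using an existing CA.
// The names mirror the SANs listed in the log line above.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-430216", Organization: []string{"jenkins.ha-430216"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-430216", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
}
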
	I1008 15:23:58.536180  170932 provision.go:177] copyRemoteCerts
	I1008 15:23:58.536249  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:23:58.536293  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.554351  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:58.657806  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:23:58.657871  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:23:58.675737  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:23:58.675790  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:23:58.692969  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:23:58.693030  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:23:58.710763  170932 provision.go:87] duration metric: took 331.416748ms to configureAuth
	I1008 15:23:58.710798  170932 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:23:58.711012  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:58.711117  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.728810  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.729089  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.729109  170932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:23:58.987429  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:23:58.987476  170932 machine.go:96] duration metric: took 4.114660829s to provisionDockerMachine
	I1008 15:23:58.987492  170932 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:23:58.987506  170932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:23:58.987579  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:23:58.987638  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.004627  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.108395  170932 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:23:59.111973  170932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:23:59.111998  170932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:23:59.112007  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:23:59.112055  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:23:59.112144  170932 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:23:59.112167  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:23:59.112248  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:23:59.119933  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:23:59.137911  170932 start.go:296] duration metric: took 150.401166ms for postStartSetup
	I1008 15:23:59.137987  170932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:59.138020  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.155852  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.255756  170932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:23:59.260399  170932 fix.go:56] duration metric: took 4.688432219s for fixHost
	I1008 15:23:59.260429  170932 start.go:83] releasing machines lock for "ha-430216", held for 4.688483389s
	I1008 15:23:59.260521  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:59.277825  170932 ssh_runner.go:195] Run: cat /version.json
	I1008 15:23:59.277877  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.277923  170932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:23:59.278022  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.295429  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.296320  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.446135  170932 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:59.452641  170932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:23:59.487637  170932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:23:59.492434  170932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:23:59.492513  170932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:23:59.500423  170932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
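
The two lines above probe /etc/cni/net.d and rename any bridge/podman configs out of the way before minikube installs its own CNI. A small Go sketch of that rename-to-disable pattern (paths and suffix mirror the log; this is illustrative, not minikube's cni package):

// hedged sketch: move matching CNI configs aside by appending a suffix,
// mirroring the "*.mk_disabled" rename in the find/mv command above.
package cni

import (
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeConfigs(dir string) ([]string, error) {
	var moved []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return moved, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}
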
	I1008 15:23:59.500461  170932 start.go:495] detecting cgroup driver to use...
	I1008 15:23:59.500493  170932 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:23:59.500529  170932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:23:59.515264  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:23:59.528404  170932 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:23:59.528483  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:23:59.543183  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:23:59.555554  170932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:23:59.635371  170932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:23:59.716233  170932 docker.go:234] disabling docker service ...
	I1008 15:23:59.716295  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:23:59.730610  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:23:59.743097  170932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:23:59.823687  170932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:23:59.905402  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:23:59.918149  170932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:23:59.932053  170932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:23:59.932109  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.941582  170932 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:23:59.941641  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.951328  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.960338  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.969240  170932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:23:59.977804  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.986975  170932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.995767  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
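
The series of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup_manager, conmon_cgroup, and a default_sysctls entry for unprivileged ports. A hypothetical Go helper doing one such in-place substitution, as a rough equivalent of the cgroup_manager sed line:

// hedged sketch: regexp-based in-place edit of a CRI-O drop-in, equivalent to
// `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'` above.
package crioconf

import (
	"os"
	"regexp"
)

var cgroupManagerLine = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

func setCgroupManager(path, driver string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := cgroupManagerLine.ReplaceAll(data, []byte(`cgroup_manager = "`+driver+`"`))
	return os.WriteFile(path, out, 0644)
}
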
	I1008 15:24:00.004950  170932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:24:00.012696  170932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:24:00.020160  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.097921  170932 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:24:00.199137  170932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:24:00.199212  170932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
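
"Will wait 60s for socket path /var/run/crio/crio.sock" followed by a stat is a simple poll-until-exists loop after the crio restart. A minimal Go version of that wait (timeout and path taken from the log; illustrative only):

// hedged sketch: poll for a socket path to appear, as done after
// `systemctl restart crio` in the log above.
package waitfor

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

// Example: err := waitForPath("/var/run/crio/crio.sock", 60*time.Second)
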
	I1008 15:24:00.203530  170932 start.go:563] Will wait 60s for crictl version
	I1008 15:24:00.203585  170932 ssh_runner.go:195] Run: which crictl
	I1008 15:24:00.207581  170932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:24:00.233465  170932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:24:00.233549  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.261379  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.291399  170932 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:24:00.292703  170932 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
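
The cli_runner call above extracts the subnet, gateway and MTU of the ha-430216 network with a Go template passed to `docker network inspect`. A hedged sketch that recovers the same fields by parsing the plain JSON output instead; the struct covers only a subset of the real inspect schema.

// hedged sketch: read subnet/gateway of a docker network by unmarshalling
// `docker network inspect <name>` JSON instead of a --format template.
package network

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerNetwork struct {
	Name string `json:"Name"`
	IPAM struct {
		Config []struct {
			Subnet  string `json:"Subnet"`
			Gateway string `json:"Gateway"`
		} `json:"Config"`
	} `json:"IPAM"`
}

func inspect(name string) (subnet, gateway string, err error) {
	out, err := exec.Command("docker", "network", "inspect", name).Output()
	if err != nil {
		return "", "", err
	}
	var nets []dockerNetwork
	if err := json.Unmarshal(out, &nets); err != nil {
		return "", "", err
	}
	if len(nets) == 0 || len(nets[0].IPAM.Config) == 0 {
		return "", "", fmt.Errorf("no IPAM config for network %q", name)
	}
	cfg := nets[0].IPAM.Config[0]
	return cfg.Subnet, cfg.Gateway, nil
}
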
	I1008 15:24:00.309684  170932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:24:00.313961  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.324165  170932 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:24:00.324285  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:24:00.324335  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.356265  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.356286  170932 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:24:00.356332  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.382025  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.382049  170932 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:24:00.382057  170932 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:24:00.382151  170932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:24:00.382262  170932 ssh_runner.go:195] Run: crio config
	I1008 15:24:00.427970  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:24:00.427994  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:24:00.428012  170932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:24:00.428037  170932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:24:00.428148  170932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:24:00.428211  170932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:24:00.436556  170932 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:24:00.436625  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:24:00.444239  170932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:24:00.456696  170932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:24:00.469551  170932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:24:00.482344  170932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:24:00.486243  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.496323  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.583018  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:00.605888  170932 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:24:00.605921  170932 certs.go:195] generating shared ca certs ...
	I1008 15:24:00.605944  170932 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:00.606081  170932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:24:00.606165  170932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:24:00.606183  170932 certs.go:257] generating profile certs ...
	I1008 15:24:00.606303  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:24:00.606399  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:24:00.606474  170932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:24:00.606489  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:24:00.606509  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:24:00.606530  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:24:00.606548  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:24:00.606570  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:24:00.606589  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:24:00.606605  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:24:00.606624  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:24:00.606692  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:24:00.606854  170932 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:24:00.606878  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:24:00.606924  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:24:00.606963  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:24:00.607001  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:24:00.607090  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:24:00.607139  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.607164  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.607187  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.607847  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:24:00.628567  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:24:00.648277  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:24:00.668208  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:24:00.692981  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:24:00.711936  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:24:00.730180  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:24:00.748157  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:24:00.765418  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:24:00.783359  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:24:00.801263  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:24:00.820380  170932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:24:00.833023  170932 ssh_runner.go:195] Run: openssl version
	I1008 15:24:00.839109  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:24:00.847959  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851748  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851803  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.886598  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:24:00.895271  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:24:00.904050  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908310  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908374  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.942319  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:24:00.950674  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:24:00.959197  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963232  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963293  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.997976  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:24:01.006382  170932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:24:01.011246  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:24:01.045831  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:24:01.080738  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:24:01.117746  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:24:01.163545  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:24:01.200651  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
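
The six `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours. The same check can be expressed in Go with crypto/x509; a sketch is below, with the file path taken from the first check in the log.

// hedged sketch: Go equivalent of `openssl x509 -noout -checkend 86400`.
package certs

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true means the cert expires inside the window, i.e. the check fails
	return time.Now().Add(window).After(cert.NotAfter), nil
}

// Example: soon, _ := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
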
	I1008 15:24:01.235623  170932 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:24:01.235701  170932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:24:01.235756  170932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:24:01.262838  170932 cri.go:89] found id: ""
	I1008 15:24:01.262915  170932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:24:01.270824  170932 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:24:01.270845  170932 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:24:01.270896  170932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:24:01.278158  170932 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:24:01.278608  170932 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.278724  170932 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:24:01.278982  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.279536  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.279976  170932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:24:01.279993  170932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:24:01.279999  170932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:24:01.280005  170932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:24:01.280012  170932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:24:01.280060  170932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
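
The kapi.go dumps above show a client-go rest.Config assembled from the profile's client cert/key and the cluster CA. A minimal client-go sketch building a clientset the same way (the file paths in the comment are the ones printed in the log; this is not minikube's kapi helper):

// hedged sketch: build a Kubernetes clientset from the cert/key/CA files seen
// in the rest.Config dump above.
package kapi

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func clientFor(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: host, // e.g. "https://192.168.49.2:8443"
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: certFile, // .../profiles/ha-430216/client.crt
			KeyFile:  keyFile,  // .../profiles/ha-430216/client.key
			CAFile:   caFile,   // .../.minikube/ca.crt
		},
	}
	return kubernetes.NewForConfig(cfg)
}
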
	I1008 15:24:01.280394  170932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:24:01.288129  170932 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:24:01.288168  170932 kubeadm.go:601] duration metric: took 17.316144ms to restartPrimaryControlPlane
	I1008 15:24:01.288180  170932 kubeadm.go:402] duration metric: took 52.566594ms to StartCluster
	I1008 15:24:01.288201  170932 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.288273  170932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.288806  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.289031  170932 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:24:01.289197  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:24:01.289144  170932 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:24:01.289252  170932 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:24:01.289269  170932 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:24:01.289295  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.289295  170932 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:24:01.289366  170932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:24:01.289764  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.289770  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.292489  170932 out.go:179] * Verifying Kubernetes components...
	I1008 15:24:01.293798  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:01.310293  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.310655  170932 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:24:01.310703  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.311185  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.312731  170932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:24:01.314130  170932 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.314152  170932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:24:01.314200  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.338454  170932 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.338481  170932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:24:01.338539  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.340562  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.356940  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.398004  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:01.411760  170932 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
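
node_ready.go above starts a wait of up to 6m0s for node "ha-430216" to report Ready, retrying through the connection-refused errors logged a little further down. A hedged sketch of that poll with client-go (a plain loop, not minikube's actual node_ready helper):

// hedged sketch: poll a node's Ready condition until it is True or the
// timeout expires, tolerating transient API errors such as "connection refused".
package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, c *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // keep retrying through transient errors
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}
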
	I1008 15:24:01.454106  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.466356  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:01.509002  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.509045  170932 retry.go:31] will retry after 350.610012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
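
Each failed `kubectl apply` above is followed by a retry.go line with a randomized, growing delay (350ms, 299ms, 424ms, ...). A small Go sketch of that retry-with-jittered-backoff pattern is below; it is illustrative, not minikube's retry package, and applyAddon in the usage comment is a hypothetical function.

// hedged sketch: retry a function with jittered, growing delays, in the spirit
// of the "will retry after ..." lines above.
package retry

import (
	"math/rand"
	"time"
)

func withBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// grow the delay each attempt and add up to 50% random jitter
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d)/2 + 1))
		time.Sleep(d)
	}
	return err
}

// Example (applyAddon is hypothetical):
//   err := withBackoff(5, 300*time.Millisecond, func() error { return applyAddon("storage-provisioner.yaml") })
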
	W1008 15:24:01.520963  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.520999  170932 retry.go:31] will retry after 299.213164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.820559  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.860141  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:01.874556  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.874593  170932 retry.go:31] will retry after 266.164942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.914615  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.914645  170932 retry.go:31] will retry after 424.567426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.141023  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.194986  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.195025  170932 retry.go:31] will retry after 499.143477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.340348  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.393985  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.394031  170932 retry.go:31] will retry after 437.996301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.694684  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.750281  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.750313  170932 retry.go:31] will retry after 867.228296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.832643  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.887793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.887828  170932 retry.go:31] will retry after 823.523521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:03.412577  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:03.617846  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:03.671770  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.671806  170932 retry.go:31] will retry after 1.456377841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.711980  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:03.765473  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.765505  170932 retry.go:31] will retry after 1.817640621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.128796  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:05.183743  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.183773  170932 retry.go:31] will retry after 2.265153126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.583676  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:05.637633  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.637664  170932 retry.go:31] will retry after 990.621367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:05.912406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:06.628981  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:06.685508  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:06.685550  170932 retry.go:31] will retry after 2.782570694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.449623  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:07.504065  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.504099  170932 retry.go:31] will retry after 3.741412594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:07.913335  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:09.469210  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:09.523862  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:09.523895  170932 retry.go:31] will retry after 5.181528653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:10.413099  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:11.245787  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:11.300714  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:11.300754  170932 retry.go:31] will retry after 3.449826104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:12.913103  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:14.705995  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:14.751595  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:14.762935  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.762977  170932 retry.go:31] will retry after 9.489237441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.806608  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.806638  170932 retry.go:31] will retry after 4.115281113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.913315  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:17.413350  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:18.922811  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:18.976958  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:18.976989  170932 retry.go:31] will retry after 5.239648896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:19.913368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:22.413029  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:24.216863  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:24.252645  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:24.273309  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.273340  170932 retry.go:31] will retry after 7.387859815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.310361  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.310404  170932 retry.go:31] will retry after 9.945221325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.913128  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:27.413070  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:29.413432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:31.662088  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:31.719810  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:31.719855  170932 retry.go:31] will retry after 13.420079077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:31.912559  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:33.913385  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:34.255764  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:34.312247  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:34.312278  170932 retry.go:31] will retry after 16.191125862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:36.413100  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:38.912942  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:41.412907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:43.913009  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:45.140914  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:45.198262  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:45.198294  170932 retry.go:31] will retry after 34.266392158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:45.913291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:48.412578  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:50.412878  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:50.504204  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:50.559793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:50.559828  170932 retry.go:31] will retry after 27.14173261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:52.413249  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:54.913400  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:57.412504  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:59.413163  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:01.913142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:04.413050  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:06.912907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:09.412961  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:11.912962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:14.412962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:16.912950  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:17.702652  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:17.758226  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:25:17.758259  170932 retry.go:31] will retry after 32.802414533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.412794  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:19.464923  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:25:19.521026  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.521181  170932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:25:21.912889  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:24.412929  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:26.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:28.913480  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:31.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:33.912646  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:36.412761  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:38.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:41.412956  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:46.412898  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:48.912819  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:50.561829  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:50.619960  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:50.620086  170932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:25:50.622296  170932 out.go:179] * Enabled addons: 
	I1008 15:25:50.623547  170932 addons.go:514] duration metric: took 1m49.334411127s for enable addons: enabled=[]
	W1008 15:25:50.912964  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:52.913239  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:55.413142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:57.912857  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:59.913308  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:02.412659  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:04.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:06.912502  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:09.412856  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:11.913398  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:14.413317  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:16.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:18.912361  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:20.912680  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:22.912778  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:24.913134  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:27.413083  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:29.912714  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:31.913049  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:34.412756  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:36.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:38.412909  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:40.912423  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:42.912843  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:45.412690  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:47.412867  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:49.413080  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:51.912848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:54.412994  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:56.413207  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:58.913394  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:01.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:03.912777  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:05.913168  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:07.913342  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:10.412475  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:12.412717  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:14.413066  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:16.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:18.912339  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:20.912432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:22.912695  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:24.913188  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:26.913438  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:29.412779  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:31.413129  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:33.413382  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:35.912652  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:37.912766  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:39.913252  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:41.913487  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:44.412715  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:46.912551  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:49.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:51.412877  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:53.413097  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:55.912620  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:58.412429  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:00.413171  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:02.912485  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:04.912746  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:07.412653  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:09.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:11.912560  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:14.412699  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:16.912516  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:19.412629  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:21.412991  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:23.413291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:25.912581  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:28.412380  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:30.412549  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:32.413358  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:34.912693  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:37.412624  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:39.412901  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:41.412960  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:43.413213  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:45.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:47.913336  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:50.412528  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:52.412731  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:54.412969  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:56.413193  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:58.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:00.912494  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:02.912626  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:04.912935  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:07.412741  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:09.412872  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:11.413111  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:13.413251  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:15.912635  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:18.412411  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:20.413378  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:22.913417  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:25.412543  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:27.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:29.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:31.913167  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:34.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:36.912368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:39.412698  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:41.412848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:45.912795  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:48.412572  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:50.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:52.412796  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:54.412926  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:56.912785  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:59.413204  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:30:01.412330  170932 node_ready.go:38] duration metric: took 6m0.000510744s for node "ha-430216" to be "Ready" ...
	I1008 15:30:01.414615  170932 out.go:203] 
	W1008 15:30:01.416405  170932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:30:01.416422  170932 out.go:285] * 
	W1008 15:30:01.418069  170932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:30:01.419605  170932 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.724488957Z" level=info msg="createCtr: removing container 8df2852cb53f4851ec1a2e8db101a67be770a12aa8cf455e49d5ead476d8784e" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.724522923Z" level=info msg="createCtr: deleting container 8df2852cb53f4851ec1a2e8db101a67be770a12aa8cf455e49d5ead476d8784e from storage" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:29:53 ha-430216 crio[519]: time="2025-10-08T15:29:53.726625792Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=e80cff82-e3c3-4ebb-8b69-7a26a11b5600 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.700231797Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5d8e97a2-6c6b-4886-bb78-cc95cd425e54 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.701260049Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=73f9077e-e23e-40ce-bcd2-e5d3d9264a8f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.702289864Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-430216/kube-controller-manager" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.702583221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.707097557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.707568794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.722897524Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724361764Z" level=info msg="createCtr: deleting container ID 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a from idIndex" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724404802Z" level=info msg="createCtr: removing container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724438929Z" level=info msg="createCtr: deleting container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a from storage" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.726577941Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.700615715Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=45b81ccf-d6c8-4de3-84dd-efe071a87cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.701722913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3ed643de-8ddf-439b-b4aa-a0ce1ca1f5ae name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.702689735Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.70294592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.706311293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.706971842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.72179926Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723597965Z" level=info msg="createCtr: deleting container ID 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a from idIndex" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723635538Z" level=info msg="createCtr: removing container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723675823Z" level=info msg="createCtr: deleting container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a from storage" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.726022502Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:30:03.995566    2165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:03.996169    2165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:03.997761    2165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:03.998198    2165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:03.999808    2165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:30:04 up  3:12,  0 user,  load average: 0.10, 0.05, 0.09
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.337957     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:29:56 ha-430216 kubelet[670]: I1008 15:29:56.513722     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.514162     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:29:56 ha-430216 kubelet[670]: E1008 15:29:56.830035     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d69f9632b12  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,LastTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	Oct 08 15:29:58 ha-430216 kubelet[670]: E1008 15:29:58.255288     670 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 15:30:00 ha-430216 kubelet[670]: E1008 15:30:00.715624     670 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-430216\" not found"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.699758     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.726922     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:01 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > podSandboxID="5c80f4ae55b7598a9f6005fa50b7a289882f53a70f50aae01efaa01d217c1484"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727060     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:01 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727105     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.338751     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:30:03 ha-430216 kubelet[670]: I1008 15:30:03.516191     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.516627     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.700053     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726335     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > podSandboxID="cc8245443c9f91f95c26bb18bd6337d82c83563ee0fa5ff837081576219a847b"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726439     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726492     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (299.596005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.62s)

x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430216 node add --control-plane --alsologtostderr -v 5: exit status 103 (258.201118ms)

-- stdout --
	* The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-430216"

-- /stdout --
** stderr ** 
	I1008 15:30:04.443834  175572 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:30:04.444072  175572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:30:04.444080  175572 out.go:374] Setting ErrFile to fd 2...
	I1008 15:30:04.444084  175572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:30:04.444275  175572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:30:04.444588  175572 mustload.go:65] Loading cluster: ha-430216
	I1008 15:30:04.444953  175572 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:30:04.445305  175572 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:30:04.462628  175572 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:30:04.462907  175572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:30:04.523123  175572 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:30:04.512695151 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:30:04.523241  175572 api_server.go:166] Checking apiserver status ...
	I1008 15:30:04.523286  175572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:30:04.523318  175572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:30:04.540820  175572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	W1008 15:30:04.648250  175572 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:30:04.650686  175572 out.go:179] * The control-plane node ha-430216 apiserver is not running: (state=Stopped)
	I1008 15:30:04.652536  175572 out.go:179]   To start a cluster, run: "minikube start -p ha-430216"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-430216 node add --control-plane --alsologtostderr -v 5" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:23:54.620410168Z",
	            "FinishedAt": "2025-10-08T15:23:53.312815942Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e2cf91d05cea12f49402d768350213ad8540946f78303d27396fc8da1227d8",
	            "SandboxKey": "/var/run/docker/netns/37e2cf91d05c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c2:30:1a:f8:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "04894192d1d8344b8201d455ae75ce4042eaf442fdd916a3acf3d43a6c3b54ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (305.455685ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                               │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │ 08 Oct 25 15:23 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node add --control-plane --alsologtostderr -v 5                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:23:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:23:54.390098  170932 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.390354  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390364  170932 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.390369  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390587  170932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.391035  170932 out.go:368] Setting JSON to false
	I1008 15:23:54.391904  170932 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11185,"bootTime":1759925849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:23:54.392000  170932 start.go:141] virtualization: kvm guest
	I1008 15:23:54.394179  170932 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:23:54.395670  170932 notify.go:220] Checking for updates...
	I1008 15:23:54.395796  170932 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:23:54.397240  170932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:23:54.398569  170932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:23:54.399837  170932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:23:54.401102  170932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:23:54.402344  170932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:23:54.404021  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.404562  170932 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:23:54.427962  170932 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:23:54.428101  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.482745  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.472714788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.482901  170932 docker.go:318] overlay module found
	I1008 15:23:54.484784  170932 out.go:179] * Using the docker driver based on existing profile
	I1008 15:23:54.486099  170932 start.go:305] selected driver: docker
	I1008 15:23:54.486113  170932 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.486218  170932 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:23:54.486309  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.544832  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.535081224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.545438  170932 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:23:54.545485  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:23:54.545534  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:23:54.545577  170932 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.547619  170932 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:23:54.548799  170932 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:23:54.550084  170932 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:23:54.551306  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:23:54.551343  170932 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:23:54.551354  170932 cache.go:58] Caching tarball of preloaded images
	I1008 15:23:54.551396  170932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:23:54.551479  170932 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:23:54.551495  170932 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:23:54.551611  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.571805  170932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:23:54.571832  170932 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:23:54.571847  170932 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:23:54.571871  170932 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:23:54.571935  170932 start.go:364] duration metric: took 46.811µs to acquireMachinesLock for "ha-430216"
	I1008 15:23:54.571952  170932 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:23:54.571957  170932 fix.go:54] fixHost starting: 
	I1008 15:23:54.572177  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.590507  170932 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:23:54.590541  170932 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:23:54.592367  170932 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:23:54.592465  170932 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:23:54.836670  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.855000  170932 kic.go:430] container "ha-430216" state is running.
	I1008 15:23:54.855424  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:54.872582  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.872800  170932 machine.go:93] provisionDockerMachine start ...
	I1008 15:23:54.872862  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:54.890640  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:54.890934  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:54.890952  170932 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:23:54.891655  170932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34990->127.0.0.1:32793: read: connection reset by peer
	I1008 15:23:58.039834  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.039875  170932 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:23:58.039947  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.058681  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.058904  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.058916  170932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:23:58.215272  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.215342  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.232894  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.233113  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.233130  170932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:23:58.379259  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:23:58.379290  170932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:23:58.379314  170932 ubuntu.go:190] setting up certificates
	I1008 15:23:58.379327  170932 provision.go:84] configureAuth start
	I1008 15:23:58.379406  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:58.396766  170932 provision.go:143] copyHostCerts
	I1008 15:23:58.396820  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396849  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:23:58.396858  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396924  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:23:58.397017  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397036  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:23:58.397043  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397070  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:23:58.397136  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397153  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:23:58.397159  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397183  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:23:58.397247  170932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:23:58.536180  170932 provision.go:177] copyRemoteCerts
	I1008 15:23:58.536249  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:23:58.536293  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.554351  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:58.657806  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:23:58.657871  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:23:58.675737  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:23:58.675790  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:23:58.692969  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:23:58.693030  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:23:58.710763  170932 provision.go:87] duration metric: took 331.416748ms to configureAuth
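The configureAuth step above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-430216, localhost, minikube). A minimal Go sketch of that kind of SAN-bearing issuance follows; it is not minikube's code, and the CA here is generated in-process purely for illustration (minikube instead loads its existing ca.pem/ca-key.pem from the certs directory shown above).

    // san_cert.go: sketch of issuing a server certificate whose SANs match the
    // ones logged above. Assumption: the in-memory CA below stands in for
    // minikube's ca.pem/ca-key.pem pair.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical CA, generated here only so the example is self-contained.
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the DNS and IP SANs from the log line.
    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-430216"}},
    		DNSNames:     []string{"ha-430216", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }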
	I1008 15:23:58.710798  170932 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:23:58.711012  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:58.711117  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.728810  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.729089  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.729109  170932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:23:58.987429  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:23:58.987476  170932 machine.go:96] duration metric: took 4.114660829s to provisionDockerMachine
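Every provisioning step in the block above is a single shell command pushed over SSH to 127.0.0.1:32793, the host port Docker mapped to the container's 22/tcp. A rough Go sketch of that one-command-over-SSH pattern is shown below, using golang.org/x/crypto/ssh; the port and key path are the values from this log, and the host-key check is skipped only because the target is a local container, not a pattern to copy for real hosts.

    // ssh_run.go: run one command over SSH, mirroring the provisioning steps above.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	// Same first command the provisioner runs: ask the machine for its hostname.
    	out, err := session.CombinedOutput("hostname")
    	fmt.Printf("out=%q err=%v\n", out, err)
    }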
	I1008 15:23:58.987492  170932 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:23:58.987506  170932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:23:58.987579  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:23:58.987638  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.004627  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.108395  170932 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:23:59.111973  170932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:23:59.111998  170932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:23:59.112007  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:23:59.112055  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:23:59.112144  170932 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:23:59.112167  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:23:59.112248  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:23:59.119933  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:23:59.137911  170932 start.go:296] duration metric: took 150.401166ms for postStartSetup
	I1008 15:23:59.137987  170932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:59.138020  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.155852  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.255756  170932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:23:59.260399  170932 fix.go:56] duration metric: took 4.688432219s for fixHost
	I1008 15:23:59.260429  170932 start.go:83] releasing machines lock for "ha-430216", held for 4.688483389s
	I1008 15:23:59.260521  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:59.277825  170932 ssh_runner.go:195] Run: cat /version.json
	I1008 15:23:59.277877  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.277923  170932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:23:59.278022  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.295429  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.296320  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.446135  170932 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:59.452641  170932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:23:59.487637  170932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:23:59.492434  170932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:23:59.492513  170932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:23:59.500423  170932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:23:59.500461  170932 start.go:495] detecting cgroup driver to use...
	I1008 15:23:59.500493  170932 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:23:59.500529  170932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:23:59.515264  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:23:59.528404  170932 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:23:59.528483  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:23:59.543183  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:23:59.555554  170932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:23:59.635371  170932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:23:59.716233  170932 docker.go:234] disabling docker service ...
	I1008 15:23:59.716295  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:23:59.730610  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:23:59.743097  170932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:23:59.823687  170932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:23:59.905402  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:23:59.918149  170932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:23:59.932053  170932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:23:59.932109  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.941582  170932 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:23:59.941641  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.951328  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.960338  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.969240  170932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:23:59.977804  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.986975  170932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.995767  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:24:00.004950  170932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:24:00.012696  170932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:24:00.020160  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.097921  170932 ssh_runner.go:195] Run: sudo systemctl restart crio
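The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Assuming CRI-O's documented section layout, the drop-in they converge on looks roughly like the following; this is a reconstruction from the sed patterns, not a capture of the actual file on the node.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]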
	I1008 15:24:00.199137  170932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:24:00.199212  170932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:24:00.203530  170932 start.go:563] Will wait 60s for crictl version
	I1008 15:24:00.203585  170932 ssh_runner.go:195] Run: which crictl
	I1008 15:24:00.207581  170932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:24:00.233465  170932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:24:00.233549  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.261379  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.291399  170932 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:24:00.292703  170932 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
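The --format template in the docker network inspect call above flattens the network into a single JSON object. For this cluster it would render roughly as shown below; the subnet and gateway follow from the 192.168.49.1/192.168.49.2 addresses elsewhere in this log, while the driver, MTU value, and the CIDR suffix on the container IP are illustrative assumptions. The trailing comma inside ContainerIPs is an artifact of the range-with-comma template, which is why the output has to be parsed leniently.

    {"Name": "ha-430216","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}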
	I1008 15:24:00.309684  170932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:24:00.313961  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
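The one-liner above pins host.minikube.internal by filtering any existing tab-separated entry out of /etc/hosts, appending a fresh one, staging the result in /tmp/h.$$, and copying it back into place with sudo. A hypothetical Go helper doing the same filter-and-append rewrite (against a scratch copy of the file, since it does not escalate privileges) might look like this:

    // hosts_pin.go: drop any stale "<ip>\t<name>" entry and append the new one,
    // mirroring the bash one-liner above. pinHostsEntry is a hypothetical helper.
    package main

    import (
    	"os"
    	"strings"
    )

    func pinHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // same effect as `grep -v $'\t<name>$'`
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	// Stage to a temp file first; the real flow stages via /tmp/h.$$ and `sudo cp`.
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := pinHostsEntry("/tmp/hosts-copy", "192.168.49.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }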
	I1008 15:24:00.324165  170932 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:24:00.324285  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:24:00.324335  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.356265  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.356286  170932 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:24:00.356332  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.382025  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.382049  170932 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:24:00.382057  170932 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:24:00.382151  170932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:24:00.382262  170932 ssh_runner.go:195] Run: crio config
	I1008 15:24:00.427970  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:24:00.427994  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:24:00.428012  170932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:24:00.428037  170932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:24:00.428148  170932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:24:00.428211  170932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:24:00.436556  170932 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:24:00.436625  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:24:00.444239  170932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:24:00.456696  170932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:24:00.469551  170932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1008 15:24:00.482344  170932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:24:00.486243  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.496323  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.583018  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:00.605888  170932 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:24:00.605921  170932 certs.go:195] generating shared ca certs ...
	I1008 15:24:00.605944  170932 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:00.606081  170932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:24:00.606165  170932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:24:00.606183  170932 certs.go:257] generating profile certs ...
	I1008 15:24:00.606303  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:24:00.606399  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:24:00.606474  170932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:24:00.606489  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:24:00.606509  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:24:00.606530  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:24:00.606548  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:24:00.606570  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:24:00.606589  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:24:00.606605  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:24:00.606624  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:24:00.606692  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:24:00.606854  170932 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:24:00.606878  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:24:00.606924  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:24:00.606963  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:24:00.607001  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:24:00.607090  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:24:00.607139  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.607164  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.607187  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.607847  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:24:00.628567  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:24:00.648277  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:24:00.668208  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:24:00.692981  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:24:00.711936  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:24:00.730180  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:24:00.748157  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:24:00.765418  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:24:00.783359  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:24:00.801263  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:24:00.820380  170932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:24:00.833023  170932 ssh_runner.go:195] Run: openssl version
	I1008 15:24:00.839109  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:24:00.847959  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851748  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851803  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.886598  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:24:00.895271  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:24:00.904050  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908310  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908374  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.942319  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:24:00.950674  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:24:00.959197  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963232  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963293  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.997976  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:24:01.006382  170932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:24:01.011246  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:24:01.045831  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:24:01.080738  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:24:01.117746  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:24:01.163545  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:24:01.200651  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
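Each openssl invocation above is `x509 -noout -checkend 86400`, i.e. "does this certificate remain valid for at least another 24 hours?". An equivalent check in Go, shown against one of the cert paths from the log, is sketched below; it is not minikube's implementation, just the same test expressed with crypto/x509.

    // checkend.go: report whether a PEM certificate expires within a given window,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True when the cert's NotAfter falls inside the window from now.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// Path taken from the log; run on the node (or against a copied cert).
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }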
	I1008 15:24:01.235623  170932 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:24:01.235701  170932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:24:01.235756  170932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:24:01.262838  170932 cri.go:89] found id: ""
	I1008 15:24:01.262915  170932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:24:01.270824  170932 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:24:01.270845  170932 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:24:01.270896  170932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:24:01.278158  170932 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:24:01.278608  170932 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.278724  170932 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:24:01.278982  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.279536  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.279976  170932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:24:01.279993  170932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:24:01.279999  170932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:24:01.280005  170932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:24:01.280012  170932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:24:01.280060  170932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:24:01.280394  170932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:24:01.288129  170932 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:24:01.288168  170932 kubeadm.go:601] duration metric: took 17.316144ms to restartPrimaryControlPlane
	I1008 15:24:01.288180  170932 kubeadm.go:402] duration metric: took 52.566594ms to StartCluster
	I1008 15:24:01.288201  170932 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.288273  170932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.288806  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.289031  170932 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:24:01.289197  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:24:01.289144  170932 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:24:01.289252  170932 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:24:01.289269  170932 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:24:01.289295  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.289295  170932 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:24:01.289366  170932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:24:01.289764  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.289770  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.292489  170932 out.go:179] * Verifying Kubernetes components...
	I1008 15:24:01.293798  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:01.310293  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.310655  170932 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:24:01.310703  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.311185  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.312731  170932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:24:01.314130  170932 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.314152  170932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:24:01.314200  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.338454  170932 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.338481  170932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:24:01.338539  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.340562  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.356940  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.398004  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:01.411760  170932 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
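The rest.Config dumps above carry the client certificate, key, and CA paths for ha-430216, and node_ready.go then polls the node object for up to 6m. A client-go sketch of that readiness poll, built directly from those file paths, could look like the following; it uses plain client-go rather than minikube's own wrappers.

    // node_ready.go: poll the ha-430216 node until its NodeReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	base := "/home/jenkins/minikube-integration/21681-94984/.minikube"
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: base + "/profiles/ha-430216/client.crt",
    			KeyFile:  base + "/profiles/ha-430216/client.key",
    			CAFile:   base + "/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-430216", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // ignore transient errors while the apiserver restarts
    	}
    	fmt.Println("timed out waiting for Ready")
    }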
	I1008 15:24:01.454106  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.466356  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:01.509002  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.509045  170932 retry.go:31] will retry after 350.610012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.520963  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.520999  170932 retry.go:31] will retry after 299.213164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.820559  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.860141  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:01.874556  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.874593  170932 retry.go:31] will retry after 266.164942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.914615  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.914645  170932 retry.go:31] will retry after 424.567426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.141023  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.194986  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.195025  170932 retry.go:31] will retry after 499.143477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.340348  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.393985  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.394031  170932 retry.go:31] will retry after 437.996301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.694684  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.750281  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.750313  170932 retry.go:31] will retry after 867.228296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.832643  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.887793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.887828  170932 retry.go:31] will retry after 823.523521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:03.412577  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:03.617846  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:03.671770  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.671806  170932 retry.go:31] will retry after 1.456377841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.711980  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:03.765473  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.765505  170932 retry.go:31] will retry after 1.817640621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.128796  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:05.183743  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.183773  170932 retry.go:31] will retry after 2.265153126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.583676  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:05.637633  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.637664  170932 retry.go:31] will retry after 990.621367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:05.912406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:06.628981  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:06.685508  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:06.685550  170932 retry.go:31] will retry after 2.782570694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.449623  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:07.504065  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.504099  170932 retry.go:31] will retry after 3.741412594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:07.913335  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:09.469210  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:09.523862  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:09.523895  170932 retry.go:31] will retry after 5.181528653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:10.413099  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:11.245787  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:11.300714  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:11.300754  170932 retry.go:31] will retry after 3.449826104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:12.913103  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:14.705995  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:14.751595  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:14.762935  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.762977  170932 retry.go:31] will retry after 9.489237441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.806608  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.806638  170932 retry.go:31] will retry after 4.115281113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.913315  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:17.413350  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:18.922811  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:18.976958  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:18.976989  170932 retry.go:31] will retry after 5.239648896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:19.913368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:22.413029  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:24.216863  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:24.252645  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:24.273309  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.273340  170932 retry.go:31] will retry after 7.387859815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.310361  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.310404  170932 retry.go:31] will retry after 9.945221325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.913128  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:27.413070  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:29.413432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:31.662088  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:31.719810  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:31.719855  170932 retry.go:31] will retry after 13.420079077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:31.912559  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:33.913385  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:34.255764  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:34.312247  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:34.312278  170932 retry.go:31] will retry after 16.191125862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:36.413100  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:38.912942  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:41.412907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:43.913009  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:45.140914  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:45.198262  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:45.198294  170932 retry.go:31] will retry after 34.266392158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:45.913291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:48.412578  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:50.412878  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:50.504204  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:50.559793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:50.559828  170932 retry.go:31] will retry after 27.14173261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:52.413249  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:54.913400  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:57.412504  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:59.413163  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:01.913142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:04.413050  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:06.912907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:09.412961  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:11.912962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:14.412962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:16.912950  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:17.702652  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:17.758226  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:25:17.758259  170932 retry.go:31] will retry after 32.802414533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.412794  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:19.464923  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:25:19.521026  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.521181  170932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:25:21.912889  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:24.412929  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:26.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:28.913480  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:31.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:33.912646  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:36.412761  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:38.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:41.412956  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:46.412898  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:48.912819  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:50.561829  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:50.619960  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:50.620086  170932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:25:50.622296  170932 out.go:179] * Enabled addons: 
	I1008 15:25:50.623547  170932 addons.go:514] duration metric: took 1m49.334411127s for enable addons: enabled=[]
	W1008 15:25:50.912964  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:52.913239  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:55.413142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:57.912857  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:59.913308  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:02.412659  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:04.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:06.912502  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:09.412856  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:11.913398  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:14.413317  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:16.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:18.912361  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:20.912680  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:22.912778  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:24.913134  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:27.413083  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:29.912714  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:31.913049  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:34.412756  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:36.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:38.412909  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:40.912423  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:42.912843  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:45.412690  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:47.412867  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:49.413080  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:51.912848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:54.412994  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:56.413207  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:58.913394  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:01.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:03.912777  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:05.913168  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:07.913342  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:10.412475  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:12.412717  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:14.413066  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:16.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:18.912339  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:20.912432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:22.912695  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:24.913188  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:26.913438  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:29.412779  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:31.413129  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:33.413382  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:35.912652  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:37.912766  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:39.913252  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:41.913487  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:44.412715  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:46.912551  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:49.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:51.412877  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:53.413097  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:55.912620  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:58.412429  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:00.413171  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:02.912485  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:04.912746  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:07.412653  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:09.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:11.912560  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:14.412699  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:16.912516  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:19.412629  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:21.412991  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:23.413291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:25.912581  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:28.412380  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:30.412549  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:32.413358  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:34.912693  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:37.412624  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:39.412901  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:41.412960  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:43.413213  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:45.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:47.913336  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:50.412528  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:52.412731  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:54.412969  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:56.413193  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:58.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:00.912494  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:02.912626  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:04.912935  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:07.412741  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:09.412872  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:11.413111  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:13.413251  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:15.912635  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:18.412411  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:20.413378  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:22.913417  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:25.412543  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:27.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:29.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:31.913167  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:34.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:36.912368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:39.412698  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:41.412848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:45.912795  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:48.412572  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:50.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:52.412796  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:54.412926  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:56.912785  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:59.413204  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:30:01.412330  170932 node_ready.go:38] duration metric: took 6m0.000510744s for node "ha-430216" to be "Ready" ...
	I1008 15:30:01.414615  170932 out.go:203] 
	W1008 15:30:01.416405  170932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:30:01.416422  170932 out.go:285] * 
	W1008 15:30:01.418069  170932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:30:01.419605  170932 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724404802Z" level=info msg="createCtr: removing container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.724438929Z" level=info msg="createCtr: deleting container 850890d03951745db0afe46d4f7f15cd4cf1cc2ea724f989aca273c2c5ee102a from storage" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:01 ha-430216 crio[519]: time="2025-10-08T15:30:01.726577941Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-430216_kube-system_f70f37fed14f7703e96ff570317e02f3_0" id=8b2f31f2-4d54-45af-a260-2aa837f03be6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.700615715Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=45b81ccf-d6c8-4de3-84dd-efe071a87cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.701722913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3ed643de-8ddf-439b-b4aa-a0ce1ca1f5ae name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.702689735Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-430216/kube-apiserver" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.70294592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.706311293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.706971842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.72179926Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723597965Z" level=info msg="createCtr: deleting container ID 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a from idIndex" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723635538Z" level=info msg="createCtr: removing container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723675823Z" level=info msg="createCtr: deleting container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a from storage" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.726022502Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.70043623Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b5bb942d-02be-4b4a-9e0e-1f38167aa09c name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.701480966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d1e36a74-8da6-45a7-9a03-510d4b152831 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.702461735Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.702731235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.706953008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.707504217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.722643511Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724189488Z" level=info msg="createCtr: deleting container ID 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384 from idIndex" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724235857Z" level=info msg="createCtr: removing container 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724272662Z" level=info msg="createCtr: deleting container 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384 from storage" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.726615387Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:30:05.566124    2340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:05.566639    2340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:05.568210    2340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:05.568644    2340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:05.569884    2340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:30:05 up  3:12,  0 user,  load average: 0.10, 0.05, 0.09
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:30:01 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > podSandboxID="5c80f4ae55b7598a9f6005fa50b7a289882f53a70f50aae01efaa01d217c1484"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727060     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:01 ha-430216 kubelet[670]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-430216_kube-system(f70f37fed14f7703e96ff570317e02f3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:01 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:01 ha-430216 kubelet[670]: E1008 15:30:01.727105     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-430216" podUID="f70f37fed14f7703e96ff570317e02f3"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.338751     670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-430216?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:30:03 ha-430216 kubelet[670]: I1008 15:30:03.516191     670 kubelet_node_status.go:75] "Attempting to register node" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.516627     670 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.700053     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726335     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > podSandboxID="cc8245443c9f91f95c26bb18bd6337d82c83563ee0fa5ff837081576219a847b"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726439     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726492     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.699968     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.726956     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:04 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:04 ha-430216 kubelet[670]:  > podSandboxID="6f8124f07c81c28b94b8139f7a46faa25a85ae41e02e6ec5ad65961dcf36f8a4"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.727068     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:04 ha-430216 kubelet[670]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:04 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.727102     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (300.552289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.56s)
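Note on the failure above: the recurring "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet logs suggest that the runtime's systemd cgroup manager could not reach a D-Bus socket inside the ha-430216 node, so every control-plane container (kube-apiserver, kube-controller-manager, etcd) failed at creation and the apiserver never came up. A rough manual check of that hypothesis (purely illustrative, not part of the test suite; it assumes the ha-430216 Docker container is still running and that systemd and D-Bus are expected to be present in the kicbase image) could look like:

	# Is systemd healthy as PID 1 inside the node, and are its bus sockets present?
	docker exec ha-430216 systemctl is-system-running
	docker exec ha-430216 ls -l /run/systemd/private /run/dbus/system_bus_socket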

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-430216" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-430216" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-430216\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-430216\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-430216\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --o
utput json"
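Both assertions above (ha_test.go:305 and ha_test.go:309) are derived from the same JSON document returned by `profile list --output json`: the profile's Config.Nodes array should list 4 nodes and its Status should be "HAppy", but the captured output contains only the single control-plane node with a "Starting" status. A minimal sketch of reproducing that check by hand (assuming jq is available on the host; this is not how the Go test itself parses the output):

	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-430216") | {status: .Status, nodes: (.Config.Nodes | length)}'
	# the test expects Status "HAppy" and 4 nodes; this run shows "Starting" and 1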
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-430216
helpers_test.go:243: (dbg) docker inspect ha-430216:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	        "Created": "2025-10-08T15:06:35.863278853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 171134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:23:54.620410168Z",
	            "FinishedAt": "2025-10-08T15:23:53.312815942Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/hosts",
	        "LogPath": "/var/lib/docker/containers/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c/d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c-json.log",
	        "Name": "/ha-430216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-430216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-430216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4ddfe6c1d7ed9236f6746ee95f2f7a2eb499f9a14890911612f1eaae667fa7c",
	                "LowerDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd0c99dd9d8b639387df711e2e2c03eb9466c76af9065f7d095e0dd8c8b376b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-430216",
	                "Source": "/var/lib/docker/volumes/ha-430216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-430216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-430216",
	                "name.minikube.sigs.k8s.io": "ha-430216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e2cf91d05cea12f49402d768350213ad8540946f78303d27396fc8da1227d8",
	            "SandboxKey": "/var/run/docker/netns/37e2cf91d05c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-430216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c2:30:1a:f8:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73375df70c4fa7b96f542881ef20421c372647f314e3a590847d7ef2cc3af40e",
	                    "EndpointID": "04894192d1d8344b8201d455ae75ce4042eaf442fdd916a3acf3d43a6c3b54ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-430216",
	                        "d4ddfe6c1d7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-430216 -n ha-430216: exit status 2 (294.314977ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-430216 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:14 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:15 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ kubectl │ ha-430216 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node add --alsologtostderr -v 5                                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node stop m02 --alsologtostderr -v 5                                               │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node start m02 --alsologtostderr -v 5                                              │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:16 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │ 08 Oct 25 15:17 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5                                           │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:17 UTC │                     │
	│ node    │ ha-430216 node list --alsologtostderr -v 5                                                   │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                                             │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                        │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │ 08 Oct 25 15:23 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node add --control-plane --alsologtostderr -v 5                                    │ ha-430216 │ jenkins │ v1.37.0 │ 08 Oct 25 15:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:23:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:23:54.390098  170932 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:23:54.390354  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390364  170932 out.go:374] Setting ErrFile to fd 2...
	I1008 15:23:54.390369  170932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:23:54.390587  170932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:23:54.391035  170932 out.go:368] Setting JSON to false
	I1008 15:23:54.391904  170932 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11185,"bootTime":1759925849,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:23:54.392000  170932 start.go:141] virtualization: kvm guest
	I1008 15:23:54.394179  170932 out.go:179] * [ha-430216] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:23:54.395670  170932 notify.go:220] Checking for updates...
	I1008 15:23:54.395796  170932 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:23:54.397240  170932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:23:54.398569  170932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:23:54.399837  170932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:23:54.401102  170932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:23:54.402344  170932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:23:54.404021  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:54.404562  170932 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:23:54.427962  170932 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:23:54.428101  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.482745  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.472714788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.482901  170932 docker.go:318] overlay module found
	I1008 15:23:54.484784  170932 out.go:179] * Using the docker driver based on existing profile
	I1008 15:23:54.486099  170932 start.go:305] selected driver: docker
	I1008 15:23:54.486113  170932 start.go:925] validating driver "docker" against &{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:23:54.486218  170932 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:23:54.486309  170932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:23:54.544832  170932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:23:54.535081224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:23:54.545438  170932 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:23:54.545485  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:23:54.545534  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:23:54.545577  170932 start.go:349] cluster config:
	{Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1008 15:23:54.547619  170932 out.go:179] * Starting "ha-430216" primary control-plane node in "ha-430216" cluster
	I1008 15:23:54.548799  170932 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:23:54.550084  170932 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:23:54.551306  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:23:54.551343  170932 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:23:54.551354  170932 cache.go:58] Caching tarball of preloaded images
	I1008 15:23:54.551396  170932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:23:54.551479  170932 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:23:54.551495  170932 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:23:54.551611  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.571805  170932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:23:54.571832  170932 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:23:54.571847  170932 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:23:54.571871  170932 start.go:360] acquireMachinesLock for ha-430216: {Name:mk6fb79887a3c5150b9adb9dafbe7081d5d26349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:23:54.571935  170932 start.go:364] duration metric: took 46.811µs to acquireMachinesLock for "ha-430216"
	I1008 15:23:54.571952  170932 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:23:54.571957  170932 fix.go:54] fixHost starting: 
	I1008 15:23:54.572177  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.590507  170932 fix.go:112] recreateIfNeeded on ha-430216: state=Stopped err=<nil>
	W1008 15:23:54.590541  170932 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:23:54.592367  170932 out.go:252] * Restarting existing docker container for "ha-430216" ...
	I1008 15:23:54.592465  170932 cli_runner.go:164] Run: docker start ha-430216
	I1008 15:23:54.836670  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:23:54.855000  170932 kic.go:430] container "ha-430216" state is running.
	I1008 15:23:54.855424  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:54.872582  170932 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/config.json ...
	I1008 15:23:54.872800  170932 machine.go:93] provisionDockerMachine start ...
	I1008 15:23:54.872862  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:54.890640  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:54.890934  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:54.890952  170932 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:23:54.891655  170932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34990->127.0.0.1:32793: read: connection reset by peer
	I1008 15:23:58.039834  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.039875  170932 ubuntu.go:182] provisioning hostname "ha-430216"
	I1008 15:23:58.039947  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.058681  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.058904  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.058916  170932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430216 && echo "ha-430216" | sudo tee /etc/hostname
	I1008 15:23:58.215272  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430216
	
	I1008 15:23:58.215342  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.232894  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.233113  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.233130  170932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430216/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:23:58.379259  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:23:58.379290  170932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:23:58.379314  170932 ubuntu.go:190] setting up certificates
	I1008 15:23:58.379327  170932 provision.go:84] configureAuth start
	I1008 15:23:58.379406  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:58.396766  170932 provision.go:143] copyHostCerts
	I1008 15:23:58.396820  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396849  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:23:58.396858  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:23:58.396924  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:23:58.397017  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397036  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:23:58.397043  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:23:58.397070  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:23:58.397136  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397153  170932 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:23:58.397159  170932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:23:58.397183  170932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:23:58.397247  170932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.ha-430216 san=[127.0.0.1 192.168.49.2 ha-430216 localhost minikube]
	I1008 15:23:58.536180  170932 provision.go:177] copyRemoteCerts
	I1008 15:23:58.536249  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:23:58.536293  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.554351  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:58.657806  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 15:23:58.657871  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:23:58.675737  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 15:23:58.675790  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 15:23:58.692969  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 15:23:58.693030  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:23:58.710763  170932 provision.go:87] duration metric: took 331.416748ms to configureAuth
	I1008 15:23:58.710798  170932 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:23:58.711012  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:23:58.711117  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:58.728810  170932 main.go:141] libmachine: Using SSH client type: native
	I1008 15:23:58.729089  170932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1008 15:23:58.729109  170932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:23:58.987429  170932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:23:58.987476  170932 machine.go:96] duration metric: took 4.114660829s to provisionDockerMachine
	I1008 15:23:58.987492  170932 start.go:293] postStartSetup for "ha-430216" (driver="docker")
	I1008 15:23:58.987506  170932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:23:58.987579  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:23:58.987638  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.004627  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.108395  170932 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:23:59.111973  170932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:23:59.111998  170932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:23:59.112007  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:23:59.112055  170932 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:23:59.112144  170932 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:23:59.112167  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /etc/ssl/certs/989002.pem
	I1008 15:23:59.112248  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:23:59.119933  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:23:59.137911  170932 start.go:296] duration metric: took 150.401166ms for postStartSetup
	I1008 15:23:59.137987  170932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:23:59.138020  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.155852  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.255756  170932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:23:59.260399  170932 fix.go:56] duration metric: took 4.688432219s for fixHost
	I1008 15:23:59.260429  170932 start.go:83] releasing machines lock for "ha-430216", held for 4.688483389s
	I1008 15:23:59.260521  170932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-430216
	I1008 15:23:59.277825  170932 ssh_runner.go:195] Run: cat /version.json
	I1008 15:23:59.277877  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.277923  170932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:23:59.278022  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:23:59.295429  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.296320  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:23:59.446135  170932 ssh_runner.go:195] Run: systemctl --version
	I1008 15:23:59.452641  170932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:23:59.487637  170932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:23:59.492434  170932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:23:59.492513  170932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:23:59.500423  170932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 15:23:59.500461  170932 start.go:495] detecting cgroup driver to use...
	I1008 15:23:59.500493  170932 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:23:59.500529  170932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:23:59.515264  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:23:59.528404  170932 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:23:59.528483  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:23:59.543183  170932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:23:59.555554  170932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:23:59.635371  170932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:23:59.716233  170932 docker.go:234] disabling docker service ...
	I1008 15:23:59.716295  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:23:59.730610  170932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:23:59.743097  170932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:23:59.823687  170932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:23:59.905402  170932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:23:59.918149  170932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:23:59.932053  170932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:23:59.932109  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.941582  170932 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:23:59.941641  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.951328  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.960338  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.969240  170932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:23:59.977804  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.986975  170932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:23:59.995767  170932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:24:00.004950  170932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:24:00.012696  170932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:24:00.020160  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.097921  170932 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:24:00.199137  170932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:24:00.199212  170932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:24:00.203530  170932 start.go:563] Will wait 60s for crictl version
	I1008 15:24:00.203585  170932 ssh_runner.go:195] Run: which crictl
	I1008 15:24:00.207581  170932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:24:00.233465  170932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:24:00.233549  170932 ssh_runner.go:195] Run: crio --version
	I1008 15:24:00.261379  170932 ssh_runner.go:195] Run: crio --version
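	At this point minikube has rewritten /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon_cgroup, default_sysctls) and restarted CRI-O. A minimal way to confirm the effective settings from the host, assuming the ha-430216 container is running and minikube ssh works for the profile:
	
		minikube -p ha-430216 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
		minikube -p ha-430216 ssh -- sudo crictl version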
	I1008 15:24:00.291399  170932 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:24:00.292703  170932 cli_runner.go:164] Run: docker network inspect ha-430216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:24:00.309684  170932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 15:24:00.313961  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.324165  170932 kubeadm.go:883] updating cluster {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:24:00.324285  170932 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:24:00.324335  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.356265  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.356286  170932 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:24:00.356332  170932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:24:00.382025  170932 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:24:00.382049  170932 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:24:00.382057  170932 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 15:24:00.382151  170932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-430216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:24:00.382262  170932 ssh_runner.go:195] Run: crio config
	I1008 15:24:00.427970  170932 cni.go:84] Creating CNI manager for ""
	I1008 15:24:00.427994  170932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 15:24:00.428012  170932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:24:00.428037  170932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430216 NodeName:ha-430216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:24:00.428148  170932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:24:00.428211  170932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:24:00.436556  170932 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:24:00.436625  170932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:24:00.444239  170932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1008 15:24:00.456696  170932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:24:00.469551  170932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
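	The kubelet drop-in and kubeadm configuration generated above are copied to the node at the paths shown in the scp lines. A short sketch for reading them back, and for mirroring the diff minikube itself performs later in this log (again assuming minikube ssh is available for the profile):
	
		minikube -p ha-430216 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		minikube -p ha-430216 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
		minikube -p ha-430216 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new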
	I1008 15:24:00.482344  170932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:24:00.486243  170932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:24:00.496323  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:00.583018  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:00.605888  170932 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216 for IP: 192.168.49.2
	I1008 15:24:00.605921  170932 certs.go:195] generating shared ca certs ...
	I1008 15:24:00.605944  170932 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:00.606081  170932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:24:00.606165  170932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:24:00.606183  170932 certs.go:257] generating profile certs ...
	I1008 15:24:00.606303  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key
	I1008 15:24:00.606399  170932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key.3c4f0d92
	I1008 15:24:00.606474  170932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key
	I1008 15:24:00.606489  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 15:24:00.606509  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 15:24:00.606530  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 15:24:00.606548  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 15:24:00.606570  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 15:24:00.606589  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 15:24:00.606605  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 15:24:00.606624  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 15:24:00.606692  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:24:00.606854  170932 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:24:00.606878  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:24:00.606924  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:24:00.606963  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:24:00.607001  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:24:00.607090  170932 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:24:00.607139  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem -> /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.607164  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.607187  170932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.607847  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:24:00.628567  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:24:00.648277  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:24:00.668208  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:24:00.692981  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 15:24:00.711936  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 15:24:00.730180  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:24:00.748157  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:24:00.765418  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:24:00.783359  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:24:00.801263  170932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:24:00.820380  170932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:24:00.833023  170932 ssh_runner.go:195] Run: openssl version
	I1008 15:24:00.839109  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:24:00.847959  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851748  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.851803  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:24:00.886598  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:24:00.895271  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:24:00.904050  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908310  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.908374  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:24:00.942319  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:24:00.950674  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:24:00.959197  170932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963232  170932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.963293  170932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:24:00.997976  170932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:24:01.006382  170932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:24:01.011246  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:24:01.045831  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:24:01.080738  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:24:01.117746  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:24:01.163545  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:24:01.200651  170932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 15:24:01.235623  170932 kubeadm.go:400] StartCluster: {Name:ha-430216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-430216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:24:01.235701  170932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:24:01.235756  170932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:24:01.262838  170932 cri.go:89] found id: ""
	I1008 15:24:01.262915  170932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:24:01.270824  170932 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:24:01.270845  170932 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:24:01.270896  170932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:24:01.278158  170932 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:24:01.278608  170932 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-430216" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.278724  170932 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-94984/kubeconfig needs updating (will repair): [kubeconfig missing "ha-430216" cluster setting kubeconfig missing "ha-430216" context setting]
	I1008 15:24:01.278982  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.279536  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.279976  170932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:24:01.279993  170932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:24:01.279999  170932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:24:01.280005  170932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:24:01.280012  170932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:24:01.280060  170932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1008 15:24:01.280394  170932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:24:01.288129  170932 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1008 15:24:01.288168  170932 kubeadm.go:601] duration metric: took 17.316144ms to restartPrimaryControlPlane
	I1008 15:24:01.288180  170932 kubeadm.go:402] duration metric: took 52.566594ms to StartCluster
	I1008 15:24:01.288201  170932 settings.go:142] acquiring lock: {Name:mk5d88d62caa531d32f56bf7e9015a41f8a013a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.288273  170932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:24:01.288806  170932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/kubeconfig: {Name:mkad168af361fe944e4c5d988f1cd5051e1bbffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:24:01.289031  170932 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:24:01.289197  170932 config.go:182] Loaded profile config "ha-430216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:24:01.289144  170932 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:24:01.289252  170932 addons.go:69] Setting storage-provisioner=true in profile "ha-430216"
	I1008 15:24:01.289269  170932 addons.go:238] Setting addon storage-provisioner=true in "ha-430216"
	I1008 15:24:01.289295  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.289295  170932 addons.go:69] Setting default-storageclass=true in profile "ha-430216"
	I1008 15:24:01.289366  170932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430216"
	I1008 15:24:01.289764  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.289770  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.292489  170932 out.go:179] * Verifying Kubernetes components...
	I1008 15:24:01.293798  170932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:24:01.310293  170932 kapi.go:59] client config for ha-430216: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/profiles/ha-430216/client.key", CAFile:"/home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:24:01.310655  170932 addons.go:238] Setting addon default-storageclass=true in "ha-430216"
	I1008 15:24:01.310703  170932 host.go:66] Checking if "ha-430216" exists ...
	I1008 15:24:01.311185  170932 cli_runner.go:164] Run: docker container inspect ha-430216 --format={{.State.Status}}
	I1008 15:24:01.312731  170932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:24:01.314130  170932 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.314152  170932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:24:01.314200  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.338454  170932 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.338481  170932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:24:01.338539  170932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-430216
	I1008 15:24:01.340562  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.356940  170932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/ha-430216/id_rsa Username:docker}
	I1008 15:24:01.398004  170932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:24:01.411760  170932 node_ready.go:35] waiting up to 6m0s for node "ha-430216" to be "Ready" ...
	I1008 15:24:01.454106  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:01.466356  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:01.509002  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.509045  170932 retry.go:31] will retry after 350.610012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.520963  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.520999  170932 retry.go:31] will retry after 299.213164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.820559  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:01.860141  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:01.874556  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.874593  170932 retry.go:31] will retry after 266.164942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:01.914615  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:01.914645  170932 retry.go:31] will retry after 424.567426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.141023  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.194986  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.195025  170932 retry.go:31] will retry after 499.143477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.340348  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.393985  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.394031  170932 retry.go:31] will retry after 437.996301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.694684  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:02.750281  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.750313  170932 retry.go:31] will retry after 867.228296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.832643  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:02.887793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:02.887828  170932 retry.go:31] will retry after 823.523521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:03.412577  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:03.617846  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:03.671770  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.671806  170932 retry.go:31] will retry after 1.456377841s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.711980  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:03.765473  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:03.765505  170932 retry.go:31] will retry after 1.817640621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.128796  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:05.183743  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.183773  170932 retry.go:31] will retry after 2.265153126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.583676  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:05.637633  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:05.637664  170932 retry.go:31] will retry after 990.621367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:05.912406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:06.628981  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:06.685508  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:06.685550  170932 retry.go:31] will retry after 2.782570694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.449623  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:07.504065  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:07.504099  170932 retry.go:31] will retry after 3.741412594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:07.913335  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:09.469210  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:09.523862  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:09.523895  170932 retry.go:31] will retry after 5.181528653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:10.413099  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:11.245787  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:11.300714  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:11.300754  170932 retry.go:31] will retry after 3.449826104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:12.913103  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:14.705995  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:24:14.751595  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:14.762935  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.762977  170932 retry.go:31] will retry after 9.489237441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.806608  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:14.806638  170932 retry.go:31] will retry after 4.115281113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:14.913315  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:17.413350  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:18.922811  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:18.976958  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:18.976989  170932 retry.go:31] will retry after 5.239648896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:19.913368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:22.413029  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:24.216863  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:24:24.252645  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:24.273309  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.273340  170932 retry.go:31] will retry after 7.387859815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.310361  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:24.310404  170932 retry.go:31] will retry after 9.945221325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:24.913128  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:27.413070  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:29.413432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:31.662088  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:31.719810  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:31.719855  170932 retry.go:31] will retry after 13.420079077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:31.912559  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:33.913385  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:34.255764  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:34.312247  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:34.312278  170932 retry.go:31] will retry after 16.191125862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:36.413100  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:38.912942  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:41.412907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:43.913009  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:45.140914  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:24:45.198262  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:45.198294  170932 retry.go:31] will retry after 34.266392158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:45.913291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:48.412578  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:50.412878  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:24:50.504204  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:24:50.559793  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:24:50.559828  170932 retry.go:31] will retry after 27.14173261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:24:52.413249  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:54.913400  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:57.412504  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:24:59.413163  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:01.913142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:04.413050  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:06.912907  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:09.412961  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:11.912962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:14.412962  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:16.912950  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:17.702652  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:17.758226  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 15:25:17.758259  170932 retry.go:31] will retry after 32.802414533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.412794  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:19.464923  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 15:25:19.521026  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:19.521181  170932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1008 15:25:21.912889  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:24.412929  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:26.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:28.913480  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:31.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:33.912646  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:36.412761  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:38.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:41.412956  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:46.412898  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:48.912819  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:25:50.561829  170932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 15:25:50.619960  170932 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 15:25:50.620086  170932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 15:25:50.622296  170932 out.go:179] * Enabled addons: 
	I1008 15:25:50.623547  170932 addons.go:514] duration metric: took 1m49.334411127s for enable addons: enabled=[]
	W1008 15:25:50.912964  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:52.913239  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:55.413142  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:57.912857  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:25:59.913308  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:02.412659  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:04.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:06.912502  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:09.412856  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:11.913398  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:14.413317  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:16.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:18.912361  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:20.912680  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:22.912778  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:24.913134  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:27.413083  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:29.912714  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:31.913049  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:34.412756  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:36.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:38.412909  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:40.912423  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:42.912843  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:45.412690  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:47.412867  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:49.413080  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:51.912848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:54.412994  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:56.413207  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:26:58.913394  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:01.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:03.912777  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:05.913168  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:07.913342  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:10.412475  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:12.412717  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:14.413066  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:16.413297  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:18.912339  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:20.912432  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:22.912695  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:24.913188  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:26.913438  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:29.412779  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:31.413129  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:33.413382  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:35.912652  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:37.912766  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:39.913252  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:41.913487  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:44.412715  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:46.912551  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:49.412793  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:51.412877  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:53.413097  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:55.912620  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:27:58.412429  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:00.413171  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:02.912485  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:04.912746  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:07.412653  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:09.412998  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:11.912560  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:14.412699  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:16.912516  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:19.412629  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:21.412991  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:23.413291  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:25.912581  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:28.412380  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:30.412549  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:32.413358  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:34.912693  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:37.412624  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:39.412901  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:41.412960  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:43.413213  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:45.413406  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:47.913336  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:50.412528  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:52.412731  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:54.412969  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:56.413193  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:28:58.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:00.912494  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:02.912626  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:04.912935  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:07.412741  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:09.412872  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:11.413111  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:13.413251  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:15.912635  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:18.412411  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:20.413378  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:22.913417  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:25.412543  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:27.413352  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:29.912740  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:31.913167  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:34.412736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:36.912368  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:39.412698  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:41.412848  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:43.912736  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:45.912795  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:48.412572  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:50.412701  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:52.412796  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:54.412926  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:56.912785  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	W1008 15:29:59.413204  170932 node_ready.go:55] error getting node "ha-430216" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-430216": dial tcp 192.168.49.2:8443: connect: connection refused
	I1008 15:30:01.412330  170932 node_ready.go:38] duration metric: took 6m0.000510744s for node "ha-430216" to be "Ready" ...
	I1008 15:30:01.414615  170932 out.go:203] 
	W1008 15:30:01.416405  170932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1008 15:30:01.416422  170932 out.go:285] * 
	W1008 15:30:01.418069  170932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:30:01.419605  170932 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723635538Z" level=info msg="createCtr: removing container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.723675823Z" level=info msg="createCtr: deleting container 8f7241f654e6ccbeb01e4c8cd3125f3101657eacfe2739647868cf2380b88f7a from storage" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:03 ha-430216 crio[519]: time="2025-10-08T15:30:03.726022502Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-430216_kube-system_283ea4febd1b5fbbe66acd75543245b9_0" id=52905fe7-8b60-432a-b5f7-00deb0bde875 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.70043623Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b5bb942d-02be-4b4a-9e0e-1f38167aa09c name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.701480966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d1e36a74-8da6-45a7-9a03-510d4b152831 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.702461735Z" level=info msg="Creating container: kube-system/etcd-ha-430216/etcd" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.702731235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.706953008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.707504217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.722643511Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724189488Z" level=info msg="createCtr: deleting container ID 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384 from idIndex" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724235857Z" level=info msg="createCtr: removing container 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.724272662Z" level=info msg="createCtr: deleting container 5090b869d3363390a773cb18625d0644a78975fb5d2ae42ce115c87ca9467384 from storage" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:04 ha-430216 crio[519]: time="2025-10-08T15:30:04.726615387Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-430216_kube-system_cd002b780418a273a4d59036de4eb1bc_0" id=fa64f126-c7cf-4439-a7f4-62a68cd8bb6e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.700580296Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a013fa41-fbae-465f-9040-20bb5c426318 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.701748776Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e1319654-983c-49db-a053-306e2159224c name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.702777519Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-430216/kube-scheduler" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.70304983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.707305574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.707794837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.728595781Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.730349128Z" level=info msg="createCtr: deleting container ID b78370354311e9db9edcf690d8b77efb82c67c37a3a9a6e3d472f9820d21b593 from idIndex" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.730393886Z" level=info msg="createCtr: removing container b78370354311e9db9edcf690d8b77efb82c67c37a3a9a6e3d472f9820d21b593" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.730428215Z" level=info msg="createCtr: deleting container b78370354311e9db9edcf690d8b77efb82c67c37a3a9a6e3d472f9820d21b593 from storage" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:30:06 ha-430216 crio[519]: time="2025-10-08T15:30:06.732810278Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-430216_kube-system_3ea77c96b9a4c1f5633ffb0e32f199a4_0" id=ce43928b-d6ee-48f5-9599-b464bd3078de name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:30:07.180278    2517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:07.180867    2517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:07.182510    2517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:07.182941    2517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:30:07.184392    2517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] i8042: Warning: Keylock active
	[  +0.015172] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003530] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000669] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000750] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000722] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000796] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000651] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000663] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.516613] block sda: the capability attribute has been deprecated.
	[  +0.091494] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025596] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.053784] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:30:07 up  3:12,  0 user,  load average: 0.09, 0.05, 0.09
	Linux ha-430216 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.700053     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726335     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > podSandboxID="cc8245443c9f91f95c26bb18bd6337d82c83563ee0fa5ff837081576219a847b"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726439     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:03 ha-430216 kubelet[670]:         container kube-apiserver start failed in pod kube-apiserver-ha-430216_kube-system(283ea4febd1b5fbbe66acd75543245b9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:03 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:03 ha-430216 kubelet[670]: E1008 15:30:03.726492     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-430216" podUID="283ea4febd1b5fbbe66acd75543245b9"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.699968     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.726956     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:04 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:04 ha-430216 kubelet[670]:  > podSandboxID="6f8124f07c81c28b94b8139f7a46faa25a85ae41e02e6ec5ad65961dcf36f8a4"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.727068     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:04 ha-430216 kubelet[670]:         container etcd start failed in pod etcd-ha-430216_kube-system(cd002b780418a273a4d59036de4eb1bc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:04 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:04 ha-430216 kubelet[670]: E1008 15:30:04.727102     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-430216" podUID="cd002b780418a273a4d59036de4eb1bc"
	Oct 08 15:30:06 ha-430216 kubelet[670]: E1008 15:30:06.700104     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-430216\" not found" node="ha-430216"
	Oct 08 15:30:06 ha-430216 kubelet[670]: E1008 15:30:06.733225     670 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:30:06 ha-430216 kubelet[670]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:06 ha-430216 kubelet[670]:  > podSandboxID="0ab57d9a2591adf8ab95f95fc92256df7785fedbd6767cf8e3bf4f53e2281c5b"
	Oct 08 15:30:06 ha-430216 kubelet[670]: E1008 15:30:06.733346     670 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:30:06 ha-430216 kubelet[670]:         container kube-scheduler start failed in pod kube-scheduler-ha-430216_kube-system(3ea77c96b9a4c1f5633ffb0e32f199a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:30:06 ha-430216 kubelet[670]:  > logger="UnhandledError"
	Oct 08 15:30:06 ha-430216 kubelet[670]: E1008 15:30:06.733387     670 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-430216" podUID="3ea77c96b9a4c1f5633ffb0e32f199a4"
	Oct 08 15:30:06 ha-430216 kubelet[670]: E1008 15:30:06.831268     670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-430216.186c8d69f9632b12  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-430216,UID:ha-430216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-430216 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-430216,},FirstTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,LastTimestamp:2025-10-08 15:24:00.690129682 +0000 UTC m=+0.082375825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430216,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430216 -n ha-430216: exit status 2 (301.033769ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-430216" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.62s)

x
+
TestJSONOutput/start/Command (495.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-497079 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1008 15:32:26.902500   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:37:26.907978   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-497079 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m15.607153175s)

-- stdout --
	{"specversion":"1.0","id":"0e2815c7-b867-4ba6-8bbd-72d29534d675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-497079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bb4a9d2-4e73-409d-9e73-715586eade50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"e9cca9c4-9f6c-40dd-a2d2-6418761f6eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd168072-f2ea-4d1d-878d-14c3dd7a68af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig"}}
	{"specversion":"1.0","id":"0312d74f-9821-47b0-9e87-667325c58b5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube"}}
	{"specversion":"1.0","id":"16650b28-2a90-4675-a2b7-8d4a741181ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"01cc1d97-ee2f-4036-926d-48f93db0f73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33d94a04-13d5-44f2-972d-64008b1d63b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e90ea06-4d6b-48ef-b38e-9e4d9f77f77b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cd23d6c4-4236-4210-928a-7b2068badb54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-497079\" primary control-plane node in \"json-output-497079\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"715fb521-eeca-4bda-8ece-c29962b710f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c3065c0-a4f7-4901-92cf-d7d934bee17f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d1241fc-522a-40d4-8520-bb52bf69b927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"51c83b33-bade-420a-9a13-795d3e79f8fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"83c4e62f-d1e5-4f8a-b7c5-1cc3e9a93c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1633173-fd30-4012-bad2-a45450bd9b7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.999862ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000965454s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001182893s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001315688s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check
failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"be20e484-1f5f-4d29-9115-14ccf6833c08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d417098-1dfd-49a8-9ba4-caf81c43b75d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba7cb344-65dc-496a-9651-58f80600ff4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pau
se'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:1
0259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"7978ad96-ee9a-4195-950a-022e443f4622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-schedul
er check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"4c6f3c0b-4d16-4f12-840c-98655947cda2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-497079 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (495.61s)
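
The exit status 80 reported here corresponds to the GUEST_START error emitted later in the event stream: kubeadm's wait-control-plane phase gave up after the 4m0s health checks against kube-apiserver, kube-controller-manager, and kube-scheduler all failed. The kubeadm output above suggests inspecting the CRI-O containers directly with crictl. A minimal Go sketch of that troubleshooting step (not part of the test suite; it simply shells out to the crictl command and socket path quoted verbatim in the log above, and assumes crictl is on PATH inside the node) might look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listKubeContainers runs the crictl command suggested in the kubeadm output
	// to list all Kubernetes containers (running or exited) on the CRI-O socket.
	func listKubeContainers() (string, error) {
		// Socket path taken verbatim from the kubeadm troubleshooting hint above.
		cmd := exec.Command("crictl",
			"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
			"ps", "-a")
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := listKubeContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		// Filter the output for kube-apiserver / kube-controller-manager /
		// kube-scheduler (the log pipes through `grep kube | grep -v pause`),
		// then inspect the failing container with `crictl logs CONTAINERID`.
		fmt.Print(out)
	}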

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0e2815c7-b867-4ba6-8bbd-72d29534d675
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-497079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 4bb4a9d2-4e73-409d-9e73-715586eade50
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21681"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e9cca9c4-9f6c-40dd-a2d2-6418761f6eb5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cd168072-f2ea-4d1d-878d-14c3dd7a68af
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0312d74f-9821-47b0-9e87-667325c58b5d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 16650b28-2a90-4675-a2b7-8d4a741181ee
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 01cc1d97-ee2f-4036-926d-48f93db0f73e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 33d94a04-13d5-44f2-972d-64008b1d63b9
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1e90ea06-4d6b-48ef-b38e-9e4d9f77f77b
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: cd23d6c4-4236-4210-928a-7b2068badb54
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-497079\" primary control-plane node in \"json-output-497079\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 715fb521-eeca-4bda-8ece-c29962b710f8
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0c3065c0-a4f7-4901-92cf-d7d934bee17f
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7d1241fc-522a-40d4-8520-bb52bf69b927
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51c83b33-bade-420a-9a13-795d3e79f8fe
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 83c4e62f-d1e5-4f8a-b7c5-1cc3e9a93c71
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d1633173-fd30-4012-bad2-a45450bd9b7f
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.999862ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000965454s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001182893s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001315688s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"h
ttps://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: be20e484-1f5f-4d29-9115-14ccf6833c08
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2d417098-1dfd-49a8-9ba4-caf81c43b75d
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ba7cb344-65dc-496a-9651-58f80600ff4b
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[c
ontrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7978ad96-ee9a-4195-950a-022e443f4622
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[con
trol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNIN
G SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 4c6f3c0b-4d16-4f12-840c-98655947cda2
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
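
This subtest fails as a direct consequence of the start failure above: because kubeadm init was retried ("initialization failed, will try again"), minikube emitted the currentstep 12 / "Generating certificates and keys ..." step event twice, and the check requires each currentstep value to be assigned only once. A minimal sketch of that uniqueness check over the JSON event stream (hypothetical standalone program; the real logic lives in json_output_test.go, and the field names are taken from the events shown above) might look like:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// stepEvent mirrors the shape of io.k8s.sigs.minikube.step cloud events as
	// they appear in the report (type, data.currentstep, data.message).
	type stepEvent struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
			Message     string `json:"message"`
		} `json:"data"`
	}

	func main() {
		seen := map[string]string{} // currentstep -> first message assigned to it
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // error events can be very long lines
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue
			}
			var ev stepEvent
			if err := json.Unmarshal([]byte(line), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			if prev, ok := seen[ev.Data.CurrentStep]; ok {
				fmt.Printf("step %s has already been assigned to %q, cannot use for %q\n",
					ev.Data.CurrentStep, prev, ev.Data.Message)
				continue
			}
			seen[ev.Data.CurrentStep] = ev.Data.Message
		}
	}

Piping the `--output=json` stream captured above through a check like this would flag the second currentstep 12 event, which is exactly what the subtest reports.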

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0e2815c7-b867-4ba6-8bbd-72d29534d675
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-497079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 4bb4a9d2-4e73-409d-9e73-715586eade50
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21681"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e9cca9c4-9f6c-40dd-a2d2-6418761f6eb5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cd168072-f2ea-4d1d-878d-14c3dd7a68af
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0312d74f-9821-47b0-9e87-667325c58b5d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 16650b28-2a90-4675-a2b7-8d4a741181ee
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 01cc1d97-ee2f-4036-926d-48f93db0f73e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 33d94a04-13d5-44f2-972d-64008b1d63b9
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1e90ea06-4d6b-48ef-b38e-9e4d9f77f77b
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: cd23d6c4-4236-4210-928a-7b2068badb54
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-497079\" primary control-plane node in \"json-output-497079\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 715fb521-eeca-4bda-8ece-c29962b710f8
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0c3065c0-a4f7-4901-92cf-d7d934bee17f
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7d1241fc-522a-40d4-8520-bb52bf69b927
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51c83b33-bade-420a-9a13-795d3e79f8fe
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 83c4e62f-d1e5-4f8a-b7c5-1cc3e9a93c71
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d1633173-fd30-4012-bad2-a45450bd9b7f
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-497079 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.999862ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000965454s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001182893s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001315688s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"h
ttps://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: be20e484-1f5f-4d29-9115-14ccf6833c08
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2d417098-1dfd-49a8-9ba4-caf81c43b75d
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ba7cb344-65dc-496a-9651-58f80600ff4b
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[c
ontrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7978ad96-ee9a-4195-950a-022e443f4622
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.92967ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[con
trol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00052377s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000560776s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000827414s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNIN
G SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 4c6f3c0b-4d16-4f12-840c-98655947cda2
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
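The entries above are minikube's CloudEvents-style records as printed for a "minikube start --output=json" run: each event carries a type (here io.k8s.sigs.minikube.error), an id, and a data payload with fields such as message, exitcode, and name, all of which appear in the dump. As a minimal sketch only, assuming the raw stream was captured line-per-event to a hypothetical file start-events.json and that the payload is nested under a lowercase data field as in the CloudEvents JSON encoding, the error messages could be pulled out with jq like this:

	grep '^{' start-events.json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

The same pattern with .data.currentstep instead of .data.message is what the DistinctCurrentSteps/IncreasingCurrentSteps checks effectively look at.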

                                                
                                    
x
+
TestMinikubeProfile (501.44s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-127124 --driver=docker  --container-runtime=crio
E1008 15:42:26.905632   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:47:26.911266   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-127124 --driver=docker  --container-runtime=crio: exit status 80 (8m18.059586759s)

                                                
                                                
-- stdout --
	* [first-127124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-127124" primary control-plane node in "first-127124" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001152758s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000161086s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000257177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000615451s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794957s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794957s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
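The kubeadm advice quoted in the stderr block above points at the CRI-O socket inside the node; because the node in this run is a Docker container built from the kic base image, those crictl commands have to be executed inside it, for example over minikube ssh. A minimal sketch along those lines, assuming crictl is available in the node image; first-127124 is the profile from this run and CONTAINERID is a placeholder to be filled in from the first command's output:

	minikube -p first-127124 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	minikube -p first-127124 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID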
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-127124 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-08 15:49:07.218110174 +0000 UTC m=+5463.459833126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-130389
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-130389: exit status 1 (27.601685ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: second-130389

                                                
                                                
** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-130389 -n second-130389
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-130389 -n second-130389: exit status 85 (55.758639ms)

                                                
                                                
-- stdout --
	* Profile "second-130389" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-130389"

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-130389" host is not running, skipping log retrieval (state="* Profile \"second-130389\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-130389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-130389
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-08 15:49:07.444664391 +0000 UTC m=+5463.686387320
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-127124
helpers_test.go:243: (dbg) docker inspect first-127124:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b",
	        "Created": "2025-10-08T15:40:54.292097229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T15:40:54.326163599Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b/hosts",
	        "LogPath": "/var/lib/docker/containers/c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b/c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b-json.log",
	        "Name": "/first-127124",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-127124:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-127124",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1fec9986b8c123794a6551e39c6dd9a0855393f1d644230c6cf35aa1647772b",
	                "LowerDir": "/var/lib/docker/overlay2/fb59b8f8b2e7a32afd297de29ddfe1582b9b43d50d37dcd698fd8ab4af3f46d5-init/diff:/var/lib/docker/overlay2/74096949aea802a550bb8813602b53a92c02916e339acbc43ec95b46a7b2289c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fb59b8f8b2e7a32afd297de29ddfe1582b9b43d50d37dcd698fd8ab4af3f46d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fb59b8f8b2e7a32afd297de29ddfe1582b9b43d50d37dcd698fd8ab4af3f46d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fb59b8f8b2e7a32afd297de29ddfe1582b9b43d50d37dcd698fd8ab4af3f46d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-127124",
	                "Source": "/var/lib/docker/volumes/first-127124/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-127124",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-127124",
	                "name.minikube.sigs.k8s.io": "first-127124",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "42d02d6b3d0f5bc4040f9a52494e33d48ce9a46fa38bb350b37d14a4af322d65",
	            "SandboxKey": "/var/run/docker/netns/42d02d6b3d0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-127124": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ce:35:f8:15:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5e623ba1aa5ab62ed996d3372e77e3eb6f3863f6adfaa9f1402f4bd80991e9db",
	                    "EndpointID": "990f8197d097604b3cb600b3d03b72233d36e03e668195d8e4e8f21ef7923f7e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-127124",
	                        "c1fec9986b8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-127124 -n first-127124
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-127124 -n first-127124: exit status 6 (290.851972ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:49:07.739608  208693 status.go:458] kubeconfig endpoint: get endpoint: "first-127124" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-127124 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-430216 node delete m03 --alsologtostderr -v 5                                                                        │ ha-430216                │ jenkins  │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ stop    │ ha-430216 stop --alsologtostderr -v 5                                                                                   │ ha-430216                │ jenkins  │ v1.37.0 │ 08 Oct 25 15:23 UTC │ 08 Oct 25 15:23 UTC │
	│ start   │ ha-430216 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-430216                │ jenkins  │ v1.37.0 │ 08 Oct 25 15:23 UTC │                     │
	│ node    │ ha-430216 node add --control-plane --alsologtostderr -v 5                                                               │ ha-430216                │ jenkins  │ v1.37.0 │ 08 Oct 25 15:30 UTC │                     │
	│ delete  │ -p ha-430216                                                                                                            │ ha-430216                │ jenkins  │ v1.37.0 │ 08 Oct 25 15:30 UTC │ 08 Oct 25 15:30 UTC │
	│ start   │ -p json-output-497079 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-497079       │ testUser │ v1.37.0 │ 08 Oct 25 15:30 UTC │                     │
	│ pause   │ -p json-output-497079 --output=json --user=testUser                                                                     │ json-output-497079       │ testUser │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:38 UTC │
	│ unpause │ -p json-output-497079 --output=json --user=testUser                                                                     │ json-output-497079       │ testUser │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:38 UTC │
	│ stop    │ -p json-output-497079 --output=json --user=testUser                                                                     │ json-output-497079       │ testUser │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:38 UTC │
	│ delete  │ -p json-output-497079                                                                                                   │ json-output-497079       │ jenkins  │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:38 UTC │
	│ start   │ -p json-output-error-849101 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-849101 │ jenkins  │ v1.37.0 │ 08 Oct 25 15:38 UTC │                     │
	│ delete  │ -p json-output-error-849101                                                                                             │ json-output-error-849101 │ jenkins  │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:38 UTC │
	│ start   │ -p docker-network-647933 --network=                                                                                     │ docker-network-647933    │ jenkins  │ v1.37.0 │ 08 Oct 25 15:38 UTC │ 08 Oct 25 15:39 UTC │
	│ delete  │ -p docker-network-647933                                                                                                │ docker-network-647933    │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:39 UTC │
	│ start   │ -p docker-network-206445 --network=bridge                                                                               │ docker-network-206445    │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:39 UTC │
	│ delete  │ -p docker-network-206445                                                                                                │ docker-network-206445    │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:39 UTC │
	│ start   │ -p existing-network-004069 --network=existing-network                                                                   │ existing-network-004069  │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:39 UTC │
	│ delete  │ -p existing-network-004069                                                                                              │ existing-network-004069  │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:39 UTC │
	│ start   │ -p custom-subnet-512804 --subnet=192.168.60.0/24                                                                        │ custom-subnet-512804     │ jenkins  │ v1.37.0 │ 08 Oct 25 15:39 UTC │ 08 Oct 25 15:40 UTC │
	│ delete  │ -p custom-subnet-512804                                                                                                 │ custom-subnet-512804     │ jenkins  │ v1.37.0 │ 08 Oct 25 15:40 UTC │ 08 Oct 25 15:40 UTC │
	│ start   │ -p static-ip-210984 --static-ip=192.168.200.200                                                                         │ static-ip-210984         │ jenkins  │ v1.37.0 │ 08 Oct 25 15:40 UTC │ 08 Oct 25 15:40 UTC │
	│ ip      │ static-ip-210984 ip                                                                                                     │ static-ip-210984         │ jenkins  │ v1.37.0 │ 08 Oct 25 15:40 UTC │ 08 Oct 25 15:40 UTC │
	│ delete  │ -p static-ip-210984                                                                                                     │ static-ip-210984         │ jenkins  │ v1.37.0 │ 08 Oct 25 15:40 UTC │ 08 Oct 25 15:40 UTC │
	│ start   │ -p first-127124 --driver=docker  --container-runtime=crio                                                               │ first-127124             │ jenkins  │ v1.37.0 │ 08 Oct 25 15:40 UTC │                     │
	│ delete  │ -p second-130389                                                                                                        │ second-130389            │ jenkins  │ v1.37.0 │ 08 Oct 25 15:49 UTC │ 08 Oct 25 15:49 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:40:49
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:40:49.199388  203620 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:40:49.199660  203620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:40:49.199663  203620 out.go:374] Setting ErrFile to fd 2...
	I1008 15:40:49.199667  203620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:40:49.199884  203620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:40:49.200354  203620 out.go:368] Setting JSON to false
	I1008 15:40:49.201285  203620 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":12200,"bootTime":1759925849,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:40:49.201369  203620 start.go:141] virtualization: kvm guest
	I1008 15:40:49.203403  203620 out.go:179] * [first-127124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:40:49.204671  203620 notify.go:220] Checking for updates...
	I1008 15:40:49.204701  203620 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:40:49.206012  203620 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:40:49.207347  203620 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:40:49.208555  203620 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:40:49.209640  203620 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:40:49.210771  203620 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:40:49.212152  203620 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:40:49.234850  203620 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:40:49.234919  203620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:40:49.290817  203620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:40:49.280353456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:40:49.290920  203620 docker.go:318] overlay module found
	I1008 15:40:49.292781  203620 out.go:179] * Using the docker driver based on user configuration
	I1008 15:40:49.294075  203620 start.go:305] selected driver: docker
	I1008 15:40:49.294082  203620 start.go:925] validating driver "docker" against <nil>
	I1008 15:40:49.294092  203620 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:40:49.294202  203620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:40:49.350784  203620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 15:40:49.340670937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:40:49.350969  203620 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:40:49.351532  203620 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1008 15:40:49.351694  203620 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 15:40:49.353794  203620 out.go:179] * Using Docker driver with root privileges
	I1008 15:40:49.355123  203620 cni.go:84] Creating CNI manager for ""
	I1008 15:40:49.355179  203620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 15:40:49.355186  203620 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:40:49.355246  203620 start.go:349] cluster config:
	{Name:first-127124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-127124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:40:49.356458  203620 out.go:179] * Starting "first-127124" primary control-plane node in "first-127124" cluster
	I1008 15:40:49.357520  203620 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 15:40:49.358588  203620 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 15:40:49.359726  203620 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:40:49.359773  203620 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:40:49.359786  203620 cache.go:58] Caching tarball of preloaded images
	I1008 15:40:49.359814  203620 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 15:40:49.359912  203620 preload.go:233] Found /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:40:49.359924  203620 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:40:49.360301  203620 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/config.json ...
	I1008 15:40:49.360322  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/config.json: {Name:mk3d2e477b3fba845c4a97b88d985ddcb69b4ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:49.379679  203620 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 15:40:49.379689  203620 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 15:40:49.379703  203620 cache.go:232] Successfully downloaded all kic artifacts
	I1008 15:40:49.379728  203620 start.go:360] acquireMachinesLock for first-127124: {Name:mk6b100e4005c01a535f06ac73c5d7db8aa26140 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:40:49.379826  203620 start.go:364] duration metric: took 86.037µs to acquireMachinesLock for "first-127124"
	I1008 15:40:49.379845  203620 start.go:93] Provisioning new machine with config: &{Name:first-127124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-127124 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:40:49.379911  203620 start.go:125] createHost starting for "" (driver="docker")
	I1008 15:40:49.382124  203620 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1008 15:40:49.382335  203620 start.go:159] libmachine.API.Create for "first-127124" (driver="docker")
	I1008 15:40:49.382359  203620 client.go:168] LocalClient.Create starting
	I1008 15:40:49.382416  203620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem
	I1008 15:40:49.382462  203620 main.go:141] libmachine: Decoding PEM data...
	I1008 15:40:49.382487  203620 main.go:141] libmachine: Parsing certificate...
	I1008 15:40:49.382548  203620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem
	I1008 15:40:49.382571  203620 main.go:141] libmachine: Decoding PEM data...
	I1008 15:40:49.382578  203620 main.go:141] libmachine: Parsing certificate...
	I1008 15:40:49.382899  203620 cli_runner.go:164] Run: docker network inspect first-127124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 15:40:49.398995  203620 cli_runner.go:211] docker network inspect first-127124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 15:40:49.399060  203620 network_create.go:284] running [docker network inspect first-127124] to gather additional debugging logs...
	I1008 15:40:49.399078  203620 cli_runner.go:164] Run: docker network inspect first-127124
	W1008 15:40:49.415338  203620 cli_runner.go:211] docker network inspect first-127124 returned with exit code 1
	I1008 15:40:49.415358  203620 network_create.go:287] error running [docker network inspect first-127124]: docker network inspect first-127124: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-127124 not found
	I1008 15:40:49.415382  203620 network_create.go:289] output of [docker network inspect first-127124]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-127124 not found
	
	** /stderr **
	I1008 15:40:49.415526  203620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:40:49.432114  203620 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d818f166888 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:d3:1f:b2:12:0b} reservation:<nil>}
	I1008 15:40:49.432500  203620 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dbc500}
	I1008 15:40:49.432522  203620 network_create.go:124] attempt to create docker network first-127124 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1008 15:40:49.432567  203620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-127124 first-127124
	I1008 15:40:49.486540  203620 network_create.go:108] docker network first-127124 192.168.58.0/24 created
	I1008 15:40:49.486563  203620 kic.go:121] calculated static IP "192.168.58.2" for the "first-127124" container
	I1008 15:40:49.486633  203620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 15:40:49.502772  203620 cli_runner.go:164] Run: docker volume create first-127124 --label name.minikube.sigs.k8s.io=first-127124 --label created_by.minikube.sigs.k8s.io=true
	I1008 15:40:49.520393  203620 oci.go:103] Successfully created a docker volume first-127124
	I1008 15:40:49.520498  203620 cli_runner.go:164] Run: docker run --rm --name first-127124-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-127124 --entrypoint /usr/bin/test -v first-127124:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 15:40:49.912334  203620 oci.go:107] Successfully prepared a docker volume first-127124
	I1008 15:40:49.912372  203620 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:40:49.912391  203620 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 15:40:49.912469  203620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-127124:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 15:40:54.218999  203620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-127124:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306481597s)
	I1008 15:40:54.219025  203620 kic.go:203] duration metric: took 4.306629412s to extract preloaded images to volume ...
	W1008 15:40:54.219118  203620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1008 15:40:54.219155  203620 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1008 15:40:54.219187  203620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 15:40:54.276605  203620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-127124 --name first-127124 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-127124 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-127124 --network first-127124 --ip 192.168.58.2 --volume first-127124:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 15:40:54.535295  203620 cli_runner.go:164] Run: docker container inspect first-127124 --format={{.State.Running}}
	I1008 15:40:54.552251  203620 cli_runner.go:164] Run: docker container inspect first-127124 --format={{.State.Status}}
	I1008 15:40:54.569213  203620 cli_runner.go:164] Run: docker exec first-127124 stat /var/lib/dpkg/alternatives/iptables
	I1008 15:40:54.621096  203620 oci.go:144] the created container "first-127124" has a running status.
	I1008 15:40:54.621140  203620 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa...
	I1008 15:40:55.474771  203620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 15:40:55.497370  203620 cli_runner.go:164] Run: docker container inspect first-127124 --format={{.State.Status}}
	I1008 15:40:55.514125  203620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 15:40:55.514138  203620 kic_runner.go:114] Args: [docker exec --privileged first-127124 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 15:40:55.556589  203620 cli_runner.go:164] Run: docker container inspect first-127124 --format={{.State.Status}}
	I1008 15:40:55.571906  203620 machine.go:93] provisionDockerMachine start ...
	I1008 15:40:55.571998  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:55.589245  203620 main.go:141] libmachine: Using SSH client type: native
	I1008 15:40:55.589563  203620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1008 15:40:55.589574  203620 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:40:55.734404  203620 main.go:141] libmachine: SSH cmd err, output: <nil>: first-127124
	
	I1008 15:40:55.734424  203620 ubuntu.go:182] provisioning hostname "first-127124"
	I1008 15:40:55.734522  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:55.751056  203620 main.go:141] libmachine: Using SSH client type: native
	I1008 15:40:55.751268  203620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1008 15:40:55.751277  203620 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-127124 && echo "first-127124" | sudo tee /etc/hostname
	I1008 15:40:55.904908  203620 main.go:141] libmachine: SSH cmd err, output: <nil>: first-127124
	
	I1008 15:40:55.904972  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:55.921478  203620 main.go:141] libmachine: Using SSH client type: native
	I1008 15:40:55.921684  203620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1008 15:40:55.921699  203620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-127124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-127124/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-127124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:40:56.067140  203620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:40:56.067161  203620 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-94984/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-94984/.minikube}
	I1008 15:40:56.067191  203620 ubuntu.go:190] setting up certificates
	I1008 15:40:56.067201  203620 provision.go:84] configureAuth start
	I1008 15:40:56.067252  203620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-127124
	I1008 15:40:56.083600  203620 provision.go:143] copyHostCerts
	I1008 15:40:56.083644  203620 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem, removing ...
	I1008 15:40:56.083651  203620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem
	I1008 15:40:56.083714  203620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/ca.pem (1078 bytes)
	I1008 15:40:56.083796  203620 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem, removing ...
	I1008 15:40:56.083799  203620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem
	I1008 15:40:56.083821  203620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/cert.pem (1123 bytes)
	I1008 15:40:56.083882  203620 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem, removing ...
	I1008 15:40:56.083885  203620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem
	I1008 15:40:56.083906  203620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-94984/.minikube/key.pem (1675 bytes)
	I1008 15:40:56.083952  203620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem org=jenkins.first-127124 san=[127.0.0.1 192.168.58.2 first-127124 localhost minikube]
	I1008 15:40:56.230402  203620 provision.go:177] copyRemoteCerts
	I1008 15:40:56.230462  203620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:40:56.230496  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.246556  203620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa Username:docker}
	I1008 15:40:56.348549  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 15:40:56.367153  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 15:40:56.384187  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:40:56.401061  203620 provision.go:87] duration metric: took 333.847735ms to configureAuth
	I1008 15:40:56.401081  203620 ubuntu.go:206] setting minikube options for container-runtime
	I1008 15:40:56.401231  203620 config.go:182] Loaded profile config "first-127124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:40:56.401317  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.419782  203620 main.go:141] libmachine: Using SSH client type: native
	I1008 15:40:56.420016  203620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1008 15:40:56.420027  203620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:40:56.678504  203620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:40:56.678524  203620 machine.go:96] duration metric: took 1.106596942s to provisionDockerMachine
	I1008 15:40:56.678534  203620 client.go:171] duration metric: took 7.296170698s to LocalClient.Create
	I1008 15:40:56.678558  203620 start.go:167] duration metric: took 7.296223095s to libmachine.API.Create "first-127124"
	I1008 15:40:56.678565  203620 start.go:293] postStartSetup for "first-127124" (driver="docker")
	I1008 15:40:56.678577  203620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:40:56.678631  203620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:40:56.678662  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.694989  203620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa Username:docker}
	I1008 15:40:56.798324  203620 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:40:56.801690  203620 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 15:40:56.801716  203620 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 15:40:56.801726  203620 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/addons for local assets ...
	I1008 15:40:56.801777  203620 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-94984/.minikube/files for local assets ...
	I1008 15:40:56.801842  203620 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem -> 989002.pem in /etc/ssl/certs
	I1008 15:40:56.801926  203620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:40:56.809071  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:40:56.828276  203620 start.go:296] duration metric: took 149.69665ms for postStartSetup
	I1008 15:40:56.828675  203620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-127124
	I1008 15:40:56.845084  203620 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/config.json ...
	I1008 15:40:56.845376  203620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 15:40:56.845422  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.861330  203620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa Username:docker}
	I1008 15:40:56.959513  203620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 15:40:56.964065  203620 start.go:128] duration metric: took 7.584135332s to createHost
	I1008 15:40:56.964085  203620 start.go:83] releasing machines lock for "first-127124", held for 7.584251748s
	I1008 15:40:56.964174  203620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-127124
	I1008 15:40:56.980175  203620 ssh_runner.go:195] Run: cat /version.json
	I1008 15:40:56.980189  203620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:40:56.980218  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.980240  203620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-127124
	I1008 15:40:56.999302  203620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa Username:docker}
	I1008 15:40:56.999486  203620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/first-127124/id_rsa Username:docker}
	I1008 15:40:57.149534  203620 ssh_runner.go:195] Run: systemctl --version
	I1008 15:40:57.155861  203620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:40:57.190091  203620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:40:57.194698  203620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:40:57.194756  203620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:40:57.220172  203620 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:40:57.220189  203620 start.go:495] detecting cgroup driver to use...
	I1008 15:40:57.220228  203620 detect.go:190] detected "systemd" cgroup driver on host os
	I1008 15:40:57.220276  203620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:40:57.236063  203620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:40:57.247967  203620 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:40:57.248007  203620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:40:57.263888  203620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:40:57.280416  203620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:40:57.358774  203620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:40:57.442082  203620 docker.go:234] disabling docker service ...
	I1008 15:40:57.442147  203620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:40:57.460075  203620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:40:57.472239  203620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:40:57.551795  203620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:40:57.630937  203620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:40:57.643277  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:40:57.657469  203620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:40:57.657522  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.667883  203620 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 15:40:57.667941  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.676839  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.685639  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.694152  203620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:40:57.701962  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.710179  203620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.723557  203620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:40:57.732092  203620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:40:57.739197  203620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:40:57.746217  203620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:40:57.822076  203620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:40:57.922764  203620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:40:57.922815  203620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:40:57.926753  203620 start.go:563] Will wait 60s for crictl version
	I1008 15:40:57.926799  203620 ssh_runner.go:195] Run: which crictl
	I1008 15:40:57.930166  203620 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 15:40:57.954131  203620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 15:40:57.954223  203620 ssh_runner.go:195] Run: crio --version
	I1008 15:40:57.981726  203620 ssh_runner.go:195] Run: crio --version
	I1008 15:40:58.011301  203620 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 15:40:58.012456  203620 cli_runner.go:164] Run: docker network inspect first-127124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 15:40:58.030630  203620 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1008 15:40:58.034872  203620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:40:58.045190  203620 kubeadm.go:883] updating cluster {Name:first-127124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-127124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s} ...
	I1008 15:40:58.045328  203620 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:40:58.045382  203620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:40:58.076645  203620 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:40:58.076657  203620 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:40:58.076703  203620 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:40:58.102477  203620 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:40:58.102494  203620 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:40:58.102502  203620 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1008 15:40:58.102584  203620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-127124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-127124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:40:58.102642  203620 ssh_runner.go:195] Run: crio config
	I1008 15:40:58.148387  203620 cni.go:84] Creating CNI manager for ""
	I1008 15:40:58.148403  203620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 15:40:58.148422  203620 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:40:58.148451  203620 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-127124 NodeName:first-127124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:40:58.148586  203620 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-127124"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:40:58.148644  203620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:40:58.156713  203620 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:40:58.156769  203620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:40:58.165007  203620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1008 15:40:58.177524  203620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:40:58.192378  203620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1008 15:40:58.204912  203620 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1008 15:40:58.208861  203620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:40:58.218387  203620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:40:58.294356  203620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:40:58.317908  203620 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124 for IP: 192.168.58.2
	I1008 15:40:58.317922  203620 certs.go:195] generating shared ca certs ...
	I1008 15:40:58.317938  203620 certs.go:227] acquiring lock for ca certs: {Name:mk21aed20e9c295fb9c879cea181c036decc27b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.318111  203620 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key
	I1008 15:40:58.318174  203620 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key
	I1008 15:40:58.318183  203620 certs.go:257] generating profile certs ...
	I1008 15:40:58.318246  203620 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.key
	I1008 15:40:58.318268  203620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.crt with IP's: []
	I1008 15:40:58.486974  203620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.crt ...
	I1008 15:40:58.486994  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.crt: {Name:mkd9179b6cac0fb379f361f128219785b3b12fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.487188  203620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.key ...
	I1008 15:40:58.487195  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/client.key: {Name:mka81103e7849eca6e00b90174f9dcdd7b420e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.487330  203620 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key.d50733bb
	I1008 15:40:58.487343  203620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt.d50733bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1008 15:40:58.706103  203620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt.d50733bb ...
	I1008 15:40:58.706122  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt.d50733bb: {Name:mkb3f7493886813604a59824a80f463e9a5309ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.706286  203620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key.d50733bb ...
	I1008 15:40:58.706295  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key.d50733bb: {Name:mk414cdadfce0cfe0a5014858c41b47bfbf57576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.706367  203620 certs.go:382] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt.d50733bb -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt
	I1008 15:40:58.706483  203620 certs.go:386] copying /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key.d50733bb -> /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key
	I1008 15:40:58.706541  203620 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.key
	I1008 15:40:58.706552  203620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.crt with IP's: []
	I1008 15:40:58.777658  203620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.crt ...
	I1008 15:40:58.777677  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.crt: {Name:mkbfb5b5a27bd6f889b461507a760e625ae3546d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.777838  203620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.key ...
	I1008 15:40:58.777844  203620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.key: {Name:mk539e33d6f76495ac7987a0f7a5a823285b0743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:40:58.778017  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem (1338 bytes)
	W1008 15:40:58.778046  203620 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900_empty.pem, impossibly tiny 0 bytes
	I1008 15:40:58.778051  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 15:40:58.778075  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/ca.pem (1078 bytes)
	I1008 15:40:58.778093  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:40:58.778111  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/certs/key.pem (1675 bytes)
	I1008 15:40:58.778145  203620 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem (1708 bytes)
	I1008 15:40:58.778793  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:40:58.797239  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:40:58.814414  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:40:58.831615  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:40:58.848807  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 15:40:58.865908  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:40:58.882931  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:40:58.900438  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/first-127124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 15:40:58.917854  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/ssl/certs/989002.pem --> /usr/share/ca-certificates/989002.pem (1708 bytes)
	I1008 15:40:58.937168  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:40:58.954533  203620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-94984/.minikube/certs/98900.pem --> /usr/share/ca-certificates/98900.pem (1338 bytes)
	I1008 15:40:58.971293  203620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:40:58.983353  203620 ssh_runner.go:195] Run: openssl version
	I1008 15:40:58.989152  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989002.pem && ln -fs /usr/share/ca-certificates/989002.pem /etc/ssl/certs/989002.pem"
	I1008 15:40:58.997247  203620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989002.pem
	I1008 15:40:59.000862  203620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:35 /usr/share/ca-certificates/989002.pem
	I1008 15:40:59.000900  203620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989002.pem
	I1008 15:40:59.035252  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/989002.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:40:59.044085  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:40:59.053279  203620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:40:59.057893  203620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:40:59.057953  203620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:40:59.095033  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:40:59.103922  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98900.pem && ln -fs /usr/share/ca-certificates/98900.pem /etc/ssl/certs/98900.pem"
	I1008 15:40:59.112273  203620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98900.pem
	I1008 15:40:59.115927  203620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:35 /usr/share/ca-certificates/98900.pem
	I1008 15:40:59.115971  203620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98900.pem
	I1008 15:40:59.150197  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/98900.pem /etc/ssl/certs/51391683.0"
	I1008 15:40:59.158952  203620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:40:59.162399  203620 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 15:40:59.162455  203620 kubeadm.go:400] StartCluster: {Name:first-127124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-127124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Aut
oPauseInterval:1m0s}
	I1008 15:40:59.162530  203620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:40:59.162575  203620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:40:59.188179  203620 cri.go:89] found id: ""
	I1008 15:40:59.188236  203620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:40:59.196088  203620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:40:59.203726  203620 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:40:59.203773  203620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:40:59.211120  203620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:40:59.211130  203620 kubeadm.go:157] found existing configuration files:
	
	I1008 15:40:59.211177  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:40:59.218298  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:40:59.218339  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:40:59.225624  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:40:59.233114  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:40:59.233156  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:40:59.240214  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:40:59.247425  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:40:59.247494  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:40:59.254470  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:40:59.261601  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:40:59.261641  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:40:59.268982  203620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:40:59.306894  203620 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:40:59.306945  203620 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:40:59.327833  203620 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:40:59.327896  203620 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:40:59.327947  203620 kubeadm.go:318] OS: Linux
	I1008 15:40:59.328005  203620 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:40:59.328063  203620 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:40:59.328111  203620 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:40:59.328162  203620 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:40:59.328216  203620 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:40:59.328278  203620 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:40:59.328342  203620 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:40:59.328381  203620 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:40:59.383907  203620 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:40:59.383998  203620 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:40:59.384097  203620 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:40:59.390638  203620 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:40:59.393563  203620 out.go:252]   - Generating certificates and keys ...
	I1008 15:40:59.393638  203620 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:40:59.393703  203620 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:41:00.013864  203620 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 15:41:00.045558  203620 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 15:41:00.101148  203620 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 15:41:00.474299  203620 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 15:41:00.652819  203620 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 15:41:00.652940  203620 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1008 15:41:01.113922  203620 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 15:41:01.114079  203620 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1008 15:41:01.249062  203620 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 15:41:01.370953  203620 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 15:41:01.444230  203620 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 15:41:01.444335  203620 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:41:01.570468  203620 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:41:01.780944  203620 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:41:01.964917  203620 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:41:02.546067  203620 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:41:02.872888  203620 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:41:02.873528  203620 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:41:02.878866  203620 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:41:02.880401  203620 out.go:252]   - Booting up control plane ...
	I1008 15:41:02.880510  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:41:02.880579  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:41:02.881265  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:41:02.895162  203620 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:41:02.895271  203620 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:41:02.901689  203620 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:41:02.902005  203620 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:41:02.902043  203620 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:41:02.999578  203620 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:41:02.999700  203620 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:41:04.000557  203620 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001152758s
	I1008 15:41:04.004577  203620 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:41:04.004718  203620 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1008 15:41:04.004931  203620 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:41:04.005068  203620 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:45:04.005639  203620 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000161086s
	I1008 15:45:04.005930  203620 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000257177s
	I1008 15:45:04.006183  203620 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000615451s
	I1008 15:45:04.006197  203620 kubeadm.go:318] 
	I1008 15:45:04.006427  203620 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:45:04.006680  203620 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:45:04.006936  203620 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:45:04.007142  203620 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:45:04.007323  203620 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:45:04.007509  203620 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:45:04.007517  203620 kubeadm.go:318] 
	I1008 15:45:04.010384  203620 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:45:04.010577  203620 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:45:04.011235  203620 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 15:45:04.011297  203620 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1008 15:45:04.011473  203620 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-127124 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001152758s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000161086s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000257177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000615451s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 15:45:04.011557  203620 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 15:45:04.451535  203620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:45:04.463990  203620 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 15:45:04.464038  203620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:45:04.471870  203620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:45:04.471883  203620 kubeadm.go:157] found existing configuration files:
	
	I1008 15:45:04.471939  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:45:04.479812  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:45:04.479872  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:45:04.487153  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:45:04.494756  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:45:04.494801  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:45:04.502385  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:45:04.510059  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:45:04.510175  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:45:04.517534  203620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:45:04.525265  203620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:45:04.525313  203620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 15:45:04.532684  203620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 15:45:04.589766  203620 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1008 15:45:04.648030  203620 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 15:49:06.780037  203620 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 15:49:06.780165  203620 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 15:49:06.782464  203620 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 15:49:06.782552  203620 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 15:49:06.782646  203620 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 15:49:06.782707  203620 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1008 15:49:06.782734  203620 kubeadm.go:318] OS: Linux
	I1008 15:49:06.782772  203620 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 15:49:06.782807  203620 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 15:49:06.782854  203620 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 15:49:06.782890  203620 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 15:49:06.782933  203620 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 15:49:06.782973  203620 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 15:49:06.783013  203620 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 15:49:06.783049  203620 kubeadm.go:318] CGROUPS_IO: enabled
	I1008 15:49:06.783118  203620 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 15:49:06.783220  203620 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 15:49:06.783308  203620 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 15:49:06.783363  203620 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 15:49:06.785793  203620 out.go:252]   - Generating certificates and keys ...
	I1008 15:49:06.785856  203620 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 15:49:06.785916  203620 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 15:49:06.785990  203620 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 15:49:06.786038  203620 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 15:49:06.786094  203620 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 15:49:06.786143  203620 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 15:49:06.786195  203620 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 15:49:06.786240  203620 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 15:49:06.786300  203620 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 15:49:06.786359  203620 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 15:49:06.786387  203620 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 15:49:06.786429  203620 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 15:49:06.786499  203620 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 15:49:06.786549  203620 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 15:49:06.786592  203620 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 15:49:06.786651  203620 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 15:49:06.786696  203620 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 15:49:06.786767  203620 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 15:49:06.786828  203620 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 15:49:06.788127  203620 out.go:252]   - Booting up control plane ...
	I1008 15:49:06.788196  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 15:49:06.788275  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 15:49:06.788342  203620 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 15:49:06.788433  203620 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 15:49:06.788539  203620 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 15:49:06.788626  203620 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 15:49:06.788690  203620 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 15:49:06.788719  203620 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 15:49:06.788824  203620 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 15:49:06.788909  203620 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 15:49:06.788953  203620 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001794957s
	I1008 15:49:06.789032  203620 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 15:49:06.789097  203620 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1008 15:49:06.789181  203620 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 15:49:06.789246  203620 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 15:49:06.789316  203620 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	I1008 15:49:06.789374  203620 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	I1008 15:49:06.789466  203620 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	I1008 15:49:06.789474  203620 kubeadm.go:318] 
	I1008 15:49:06.789548  203620 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 15:49:06.789613  203620 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 15:49:06.789678  203620 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 15:49:06.789756  203620 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 15:49:06.789812  203620 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 15:49:06.789879  203620 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 15:49:06.789888  203620 kubeadm.go:318] 
	I1008 15:49:06.789943  203620 kubeadm.go:402] duration metric: took 8m7.627502953s to StartCluster
	I1008 15:49:06.789995  203620 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 15:49:06.790053  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 15:49:06.819433  203620 cri.go:89] found id: ""
	I1008 15:49:06.819481  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.819489  203620 logs.go:284] No container was found matching "kube-apiserver"
	I1008 15:49:06.819495  203620 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 15:49:06.819542  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 15:49:06.845638  203620 cri.go:89] found id: ""
	I1008 15:49:06.845659  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.845668  203620 logs.go:284] No container was found matching "etcd"
	I1008 15:49:06.845676  203620 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 15:49:06.845741  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 15:49:06.871951  203620 cri.go:89] found id: ""
	I1008 15:49:06.871968  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.871974  203620 logs.go:284] No container was found matching "coredns"
	I1008 15:49:06.871979  203620 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 15:49:06.872024  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 15:49:06.897797  203620 cri.go:89] found id: ""
	I1008 15:49:06.897817  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.897823  203620 logs.go:284] No container was found matching "kube-scheduler"
	I1008 15:49:06.897828  203620 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 15:49:06.897873  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 15:49:06.924263  203620 cri.go:89] found id: ""
	I1008 15:49:06.924280  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.924286  203620 logs.go:284] No container was found matching "kube-proxy"
	I1008 15:49:06.924292  203620 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 15:49:06.924342  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 15:49:06.950002  203620 cri.go:89] found id: ""
	I1008 15:49:06.950017  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.950023  203620 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 15:49:06.950028  203620 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 15:49:06.950078  203620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 15:49:06.975429  203620 cri.go:89] found id: ""
	I1008 15:49:06.975459  203620 logs.go:282] 0 containers: []
	W1008 15:49:06.975469  203620 logs.go:284] No container was found matching "kindnet"
	I1008 15:49:06.975480  203620 logs.go:123] Gathering logs for kubelet ...
	I1008 15:49:06.975496  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 15:49:07.042908  203620 logs.go:123] Gathering logs for dmesg ...
	I1008 15:49:07.042930  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 15:49:07.056670  203620 logs.go:123] Gathering logs for describe nodes ...
	I1008 15:49:07.056690  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 15:49:07.115109  203620 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:49:07.108634    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.109163    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.110673    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.111083    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.112134    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 15:49:07.108634    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.109163    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.110673    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.111083    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:07.112134    2398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 15:49:07.115137  203620 logs.go:123] Gathering logs for CRI-O ...
	I1008 15:49:07.115155  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 15:49:07.172479  203620 logs.go:123] Gathering logs for container status ...
	I1008 15:49:07.172503  203620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 15:49:07.201952  203620 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794957s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 15:49:07.202011  203620 out.go:285] * 
	W1008 15:49:07.202122  203620 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794957s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:49:07.202145  203620 out.go:285] * 
	W1008 15:49:07.204111  203620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 15:49:07.207711  203620 out.go:203] 
	W1008 15:49:07.208735  203620 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001794957s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000074675s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00032273s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000503503s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 15:49:07.208755  203620 out.go:285] * 
	I1008 15:49:07.210103  203620 out.go:203] 
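
Editor's note: the kubeadm output above recommends listing the Kubernetes containers with crictl to find the component that crashed before the 4m0s health checks expired. A minimal triage sketch along those lines (the profile name first-127124 and the crio socket path are taken from the log above; the commands themselves are illustrative, not commands that were run in this job):

	# Open a shell on the minikube node for this profile
	minikube -p first-127124 ssh
	# List all Kubernetes containers, including exited ones, as the kubeadm hint suggests
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container by its ID
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID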
	
	
	==> CRI-O <==
	Oct 08 15:48:56 first-127124 crio[779]: time="2025-10-08T15:48:56.440762005Z" level=info msg="createCtr: deleting container ada9a72b4e3d04179d158638ba06d429f707e1b23ee2c8a9b96d681813a11cc6 from storage" id=cfbfb272-5e54-4bdc-bfe9-ea1c30cbe68e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:48:56 first-127124 crio[779]: time="2025-10-08T15:48:56.444384806Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-127124_kube-system_fd123780c473bdfff990e1583bdb8aa4_0" id=f3f9a161-a5df-491b-a660-ed2f1ee9e705 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:48:56 first-127124 crio[779]: time="2025-10-08T15:48:56.444731083Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-127124_kube-system_cd416f23a320c046811acad7a7dbb302_0" id=cfbfb272-5e54-4bdc-bfe9-ea1c30cbe68e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.410941328Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=4b270986-7117-43ed-8b06-630f4299bb4a name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.411873902Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=90a51bc0-7101-44c5-b571-df38ffe53beb name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.412809357Z" level=info msg="Creating container: kube-system/etcd-first-127124/etcd" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.413093597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.416298009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.416884059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.433481379Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.434831898Z" level=info msg="createCtr: deleting container ID 0161c015526a9773c168d42bf51d3ee7c100f95ae613af55b78274fea993d452 from idIndex" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.43487072Z" level=info msg="createCtr: removing container 0161c015526a9773c168d42bf51d3ee7c100f95ae613af55b78274fea993d452" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.434914349Z" level=info msg="createCtr: deleting container 0161c015526a9773c168d42bf51d3ee7c100f95ae613af55b78274fea993d452 from storage" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:02 first-127124 crio[779]: time="2025-10-08T15:49:02.437066244Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-127124_kube-system_a86b4ad678e99412cc93d9a12c0db904_0" id=aba9b12c-bb92-4867-b75a-79ccd6700cbc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.410758197Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fc6af4dc-1bd7-438c-90dc-3aaffa177b56 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.411626318Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=934e4b5d-3d66-4fe0-8360-bfbaffe9d305 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.412520766Z" level=info msg="Creating container: kube-system/kube-controller-manager-first-127124/kube-controller-manager" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.412744264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.415956484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.416356129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.431283046Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.432682586Z" level=info msg="createCtr: deleting container ID 4869106c66c9cef83ffb4cd5735b871fc4466af63297ca29f99c22a921c1b688 from idIndex" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.432720499Z" level=info msg="createCtr: removing container 4869106c66c9cef83ffb4cd5735b871fc4466af63297ca29f99c22a921c1b688" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.432760084Z" level=info msg="createCtr: deleting container 4869106c66c9cef83ffb4cd5735b871fc4466af63297ca29f99c22a921c1b688 from storage" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 15:49:05 first-127124 crio[779]: time="2025-10-08T15:49:05.434886485Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-127124_kube-system_366e08cbbd0942734b2524e552023c9b_0" id=eea76e27-1555-4f14-bf9e-732831f2ca2e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 15:49:08.303394    2543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:08.303931    2543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:08.305511    2543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:08.305969    2543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 15:49:08.307429    2543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 12:17] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 15:49:08 up  3:31,  0 user,  load average: 0.08, 0.26, 0.28
	Linux first-127124 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 15:48:56 first-127124 kubelet[1797]:         container kube-scheduler start failed in pod kube-scheduler-first-127124_kube-system(cd416f23a320c046811acad7a7dbb302): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:48:56 first-127124 kubelet[1797]:  > logger="UnhandledError"
	Oct 08 15:48:56 first-127124 kubelet[1797]: E1008 15:48:56.445904    1797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-127124" podUID="cd416f23a320c046811acad7a7dbb302"
	Oct 08 15:49:02 first-127124 kubelet[1797]: E1008 15:49:02.344484    1797 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.58.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dfirst-127124&limit=500&resourceVersion=0\": dial tcp 192.168.58.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 15:49:02 first-127124 kubelet[1797]: E1008 15:49:02.410493    1797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-127124\" not found" node="first-127124"
	Oct 08 15:49:02 first-127124 kubelet[1797]: E1008 15:49:02.437402    1797 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:49:02 first-127124 kubelet[1797]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:49:02 first-127124 kubelet[1797]:  > podSandboxID="d47e83e5fee9b54d59527fe220e24805c0506e628beded73a1dd8ec6356d0e41"
	Oct 08 15:49:02 first-127124 kubelet[1797]: E1008 15:49:02.437535    1797 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:49:02 first-127124 kubelet[1797]:         container etcd start failed in pod etcd-first-127124_kube-system(a86b4ad678e99412cc93d9a12c0db904): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:49:02 first-127124 kubelet[1797]:  > logger="UnhandledError"
	Oct 08 15:49:02 first-127124 kubelet[1797]: E1008 15:49:02.437582    1797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-127124" podUID="a86b4ad678e99412cc93d9a12c0db904"
	Oct 08 15:49:03 first-127124 kubelet[1797]: E1008 15:49:03.029621    1797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-127124?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 08 15:49:03 first-127124 kubelet[1797]: I1008 15:49:03.191104    1797 kubelet_node_status.go:75] "Attempting to register node" node="first-127124"
	Oct 08 15:49:03 first-127124 kubelet[1797]: E1008 15:49:03.191548    1797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-127124"
	Oct 08 15:49:05 first-127124 kubelet[1797]: E1008 15:49:05.410283    1797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-127124\" not found" node="first-127124"
	Oct 08 15:49:05 first-127124 kubelet[1797]: E1008 15:49:05.435180    1797 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 15:49:05 first-127124 kubelet[1797]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:49:05 first-127124 kubelet[1797]:  > podSandboxID="080fe4f8109a70b4447ca4eafa5af4598ebb6e1c5a9cc6ce21002f07e1ebfcb9"
	Oct 08 15:49:05 first-127124 kubelet[1797]: E1008 15:49:05.435274    1797 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 15:49:05 first-127124 kubelet[1797]:         container kube-controller-manager start failed in pod kube-controller-manager-first-127124_kube-system(366e08cbbd0942734b2524e552023c9b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 15:49:05 first-127124 kubelet[1797]:  > logger="UnhandledError"
	Oct 08 15:49:05 first-127124 kubelet[1797]: E1008 15:49:05.435303    1797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-127124" podUID="366e08cbbd0942734b2524e552023c9b"
	Oct 08 15:49:06 first-127124 kubelet[1797]: E1008 15:49:06.124388    1797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-127124.186c8e90abb98c7f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-127124,UID:first-127124,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-127124 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-127124,},FirstTimestamp:2025-10-08 15:45:06.402520191 +0000 UTC m=+0.628143482,LastTimestamp:2025-10-08 15:45:06.402520191 +0000 UTC m=+0.628143482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-127124,}"
	Oct 08 15:49:06 first-127124 kubelet[1797]: E1008 15:49:06.422104    1797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-127124\" not found"
	

                                                
                                                
-- /stdout --
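For reference, the crictl commands suggested in the kubeadm output above can be run directly against this node. A minimal sketch, assuming SSH access to the profile named in the log (first-127124) and the CRI-O socket path quoted above:

	minikube ssh -p first-127124
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause    # may need sudo inside the node
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID                     # CONTAINERID taken from the ps -a output

Given the empty "container status" table above, ps -a is likely to show only failed create attempts; the CRI-O and kubelet logs point at a create-time error ("cannot open sd-bus: No such file or directory") rather than a crashed container.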
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-127124 -n first-127124
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-127124 -n first-127124: exit status 6 (295.118114ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 15:49:08.684832  209030 status.go:458] kubeconfig endpoint: get endpoint: "first-127124" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-127124" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-127124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-127124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-127124: (1.896683116s)
--- FAIL: TestMinikubeProfile (501.44s)
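The exit status 6 above comes from the profile being Stopped with no endpoint left in the kubeconfig. A minimal sketch of rerunning the same checks by hand, using the profile name from this run (the -p flag on update-context is an assumption; the log only suggests the bare command):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p first-127124 -n first-127124
	out/minikube-linux-amd64 update-context -p first-127124    # what the stale-kubeconfig warning above suggests
	out/minikube-linux-amd64 delete -p first-127124            # the cleanup step the helper runs above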

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (7200.063s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-886131
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-886131-m01 --driver=docker  --container-runtime=crio
E1008 16:13:50.000513   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 16:17:26.912089   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m28s)
		TestMultiNode/serial (28m28s)
		TestMultiNode/serial/ValidateNameConflict (5m23s)

                                                
                                                
goroutine 2071 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000505180, {0x32044ee?, 0xc00072ba88?}, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000505180)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000505180, 0xc00072bbc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000594108, {0x5c636c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc00045fc70?, 0x5c8bdc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000714fa0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000714fa0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 151 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0014fa540)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fa540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertOptions(0xc0014fa540)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0xb3
testing.tRunner(0xc0014fa540, 0x3c52c78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 124 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000208540)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000208540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc000208540)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000208540, 0x3c52d78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 648 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008e5810, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc000a40ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc5360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014e0f00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x42b6bc?, 0x5cb35a0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3faf870?, 0xc0000844d0?}, 0x41b265?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3faf870, 0xc0000844d0}, 0xc000a40f50, {0x3f66880, 0xc000720000}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f66880?, 0xc000720000?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000622020, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 639
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 152 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0014fac40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fac40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertExpiration(0xc0014fac40)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0014fac40, 0x3c52c70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 155 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0014fb180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fb180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0014fb180)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0xb3
testing.tRunner(0xc0014fb180, 0x3c52cb8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 154 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0014fafc0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0014fafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0xb3
testing.tRunner(0xc0014fafc0, 0x3c52cc0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 220 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7042b0b56738, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0002e0000?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0002e0000)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0002e0000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000996700)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc000996700)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001fe500, {0x3f9cdd0, 0xc000996700})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001fe500)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 217
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

                                                
                                                
goroutine 157 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0014fb6c0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fb6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc0014fb6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0xb3
testing.tRunner(0xc0014fb6c0, 0x3c52d08)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 1843 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0014f9180, {0x31f4138?, 0x1a3185c5000?}, 0xc001530810)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0014f9180)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x3c5
testing.tRunner(0xc0014f9180, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 639 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0014e0f00, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 401
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 746 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00152c300, 0xc000085500)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 392
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 638 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc1f60, {{0x3fb6f88, 0xc0002483c0?}, 0xc00031e620?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 401
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 1985 [syscall, 5 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xe, 0xc000729a08, 0x4, 0xc00179e1b0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc000729a36?, 0xc000729b60?, 0x5930ab?, 0x7ffcf84261ad?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000594258?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0x5c8e460?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0014fc180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0014fc180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014f8a80, 0xc0014fc180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3faf4f0, 0xc00035e8c0}, 0xc0014f8a80, {0xc0004a4b60, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc0014f8a80?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc0014f8a80, 0xc0008e4000)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1834
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 650 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 649
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 503 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001528a80, 0xc001696540)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 502
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 583 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001be780, 0xc000085260)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 582
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 1834 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00166c700, {0x321907a?, 0x4097904?}, 0xc0008e4000)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc00166c700)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc00166c700, 0xc001530810)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 649 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faf870, 0xc0000844d0}, 0xc001522f50, 0xc001522f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3faf870, 0xc0000844d0}, 0xc0?, 0xc001522f50, 0xc001522f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faf870?, 0xc0000844d0?}, 0xc0009f8700?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc0014fc180?, 0xc00140c1c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 639
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2102 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x7042b0b56508, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001bba300?, 0xc0008fea8f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001bba300, {0xc0008fea8f, 0x571, 0x571})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00051e258, {0xc0008fea8f?, 0x41835f?, 0x2c43f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0009fa210, {0x3f64c80, 0xc0009ac038})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc0009fa210}, {0x3f64c80, 0xc0009ac038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00051e258?, {0x3f64e00, 0xc0009fa210})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00051e258, {0x3f64e00, 0xc0009fa210})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc0009fa210}, {0x3f64d00, 0xc00051e258}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00140c1c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1985
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2103 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7042b0b56850, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001bba3c0?, 0xc001a7d76a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001bba3c0, {0xc001a7d76a, 0x896, 0x896})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00051e698, {0xc001a7d76a?, 0x41835f?, 0x2c43f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0009fa240, {0x3f64c80, 0xc000476040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc0009fa240}, {0x3f64c80, 0xc000476040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00051e698?, {0x3f64e00, 0xc0009fa240})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00051e698, {0x3f64e00, 0xc0009fa240})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc0009fa240}, {0x3f64d00, 0xc00051e698}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000780180?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1985
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2104 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014fc180, 0xc0017861c0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 1985
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                    

Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.32
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.59
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
39 TestErrorSpam/start 0.62
40 TestErrorSpam/status 0.87
41 TestErrorSpam/pause 1.33
42 TestErrorSpam/unpause 1.33
43 TestErrorSpam/stop 1.39
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
55 TestFunctional/serial/CacheCmd/cache/add_local 1.6
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
65 TestFunctional/serial/LogsCmd 0.92
66 TestFunctional/serial/LogsFileCmd 0.94
69 TestFunctional/parallel/ConfigCmd 0.39
71 TestFunctional/parallel/DryRun 0.42
72 TestFunctional/parallel/InternationalLanguage 0.18
78 TestFunctional/parallel/AddonsCmd 0.15
81 TestFunctional/parallel/SSHCmd 0.66
82 TestFunctional/parallel/CpCmd 1.99
84 TestFunctional/parallel/FileSync 0.28
85 TestFunctional/parallel/CertSync 2
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
93 TestFunctional/parallel/License 0.51
97 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
100 TestFunctional/parallel/ProfileCmd/profile_list 0.43
102 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
104 TestFunctional/parallel/Version/short 0.05
105 TestFunctional/parallel/Version/components 0.53
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.44
111 TestFunctional/parallel/ImageCommands/Setup 1.58
112 TestFunctional/parallel/MountCmd/specific-port 1.95
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.48
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.45
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.21
188 TestKicCustomNetwork/create_custom_network 29.14
189 TestKicCustomNetwork/use_default_bridge_network 27.28
190 TestKicExistingNetwork 25.05
191 TestKicCustomSubnet 28.64
192 TestKicStaticIP 25.67
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 6.14
198 TestMountStart/serial/VerifyMountFirst 0.26
199 TestMountStart/serial/StartWithMountSecond 5.41
200 TestMountStart/serial/VerifyMountSecond 0.26
201 TestMountStart/serial/DeleteFirst 1.64
202 TestMountStart/serial/VerifyMountPostDelete 0.26
203 TestMountStart/serial/Stop 1.2
204 TestMountStart/serial/RestartStopped 7.96
205 TestMountStart/serial/VerifyMountPostStop 0.27
x
+
TestDownloadOnly/v1.28.0/json-events (7.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-211325 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-211325 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.321919845s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1008 14:18:11.121710   98900 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1008 14:18:11.121848   98900 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
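The preload-exists check passes because the tarball cached by the preceding download step is already on disk. A minimal sketch of verifying the same thing by hand, using the path from the log above (the expected md5 is the checksum reported in the Last Start log further down in this report):

	ls -lh /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	md5sum /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4    # expect 72bc7f8573f574c02d8c9a9b3496176b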

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-211325
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-211325: exit status 85 (62.082411ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-211325 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-211325 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:18:03
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:18:03.842764   98912 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:18:03.843052   98912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:03.843064   98912 out.go:374] Setting ErrFile to fd 2...
	I1008 14:18:03.843071   98912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:03.843305   98912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	W1008 14:18:03.843468   98912 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21681-94984/.minikube/config/config.json: open /home/jenkins/minikube-integration/21681-94984/.minikube/config/config.json: no such file or directory
	I1008 14:18:03.844138   98912 out.go:368] Setting JSON to true
	I1008 14:18:03.845050   98912 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7235,"bootTime":1759925849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:18:03.845154   98912 start.go:141] virtualization: kvm guest
	I1008 14:18:03.847429   98912 out.go:99] [download-only-211325] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1008 14:18:03.847600   98912 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 14:18:03.847647   98912 notify.go:220] Checking for updates...
	I1008 14:18:03.849030   98912 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:18:03.850352   98912 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:18:03.851626   98912 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:18:03.852905   98912 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:18:03.854305   98912 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:18:03.856532   98912 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:18:03.856795   98912 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:18:03.879229   98912 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:18:03.879340   98912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:04.296302   98912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-08 14:18:04.284334654 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:04.296423   98912 docker.go:318] overlay module found
	I1008 14:18:04.298429   98912 out.go:99] Using the docker driver based on user configuration
	I1008 14:18:04.298474   98912 start.go:305] selected driver: docker
	I1008 14:18:04.298486   98912 start.go:925] validating driver "docker" against <nil>
	I1008 14:18:04.298577   98912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:04.358664   98912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-08 14:18:04.349682836 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:04.358825   98912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:18:04.359336   98912 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1008 14:18:04.359577   98912 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:18:04.361401   98912 out.go:171] Using Docker driver with root privileges
	I1008 14:18:04.362678   98912 cni.go:84] Creating CNI manager for ""
	I1008 14:18:04.362746   98912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 14:18:04.362763   98912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 14:18:04.362853   98912 start.go:349] cluster config:
	{Name:download-only-211325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-211325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:18:04.364334   98912 out.go:99] Starting "download-only-211325" primary control-plane node in "download-only-211325" cluster
	I1008 14:18:04.364374   98912 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 14:18:04.365654   98912 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1008 14:18:04.365700   98912 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 14:18:04.365819   98912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 14:18:04.381738   98912 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 14:18:04.381910   98912 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 14:18:04.382006   98912 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 14:18:04.388607   98912 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1008 14:18:04.388632   98912 cache.go:58] Caching tarball of preloaded images
	I1008 14:18:04.388770   98912 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 14:18:04.390662   98912 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1008 14:18:04.390686   98912 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1008 14:18:04.415130   98912 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1008 14:18:04.415279   98912 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-211325 host does not exist
	  To start a cluster, run: "minikube start -p download-only-211325"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
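As the Last Start log notes, --download-only leaves no running host, so `logs -p` exits 85 here by design. Bringing the profile up afterwards would use the command the output itself suggests (a sketch only; in this run the profile is deleted in the next steps):

	out/minikube-linux-amd64 start -p download-only-211325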

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-211325
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-840888 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-840888 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.59271073s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1008 14:18:16.117002   98900 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1008 14:18:16.117068   98900 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-94984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-840888
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-840888: exit status 85 (61.853489ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-211325 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-211325 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ delete  │ -p download-only-211325                                                                                                                                                   │ download-only-211325 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │ 08 Oct 25 14:18 UTC │
	│ start   │ -o=json --download-only -p download-only-840888 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-840888 │ jenkins │ v1.37.0 │ 08 Oct 25 14:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:18:11
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:18:11.566026   99272 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:18:11.566323   99272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:11.566334   99272 out.go:374] Setting ErrFile to fd 2...
	I1008 14:18:11.566338   99272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:18:11.566603   99272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 14:18:11.567157   99272 out.go:368] Setting JSON to true
	I1008 14:18:11.568061   99272 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7243,"bootTime":1759925849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:18:11.568177   99272 start.go:141] virtualization: kvm guest
	I1008 14:18:11.570046   99272 out.go:99] [download-only-840888] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:18:11.570224   99272 notify.go:220] Checking for updates...
	I1008 14:18:11.571323   99272 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:18:11.572803   99272 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:18:11.574003   99272 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 14:18:11.575197   99272 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 14:18:11.576531   99272 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:18:11.579004   99272 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:18:11.579222   99272 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:18:11.600277   99272 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 14:18:11.600369   99272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:11.655676   99272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:53 SystemTime:2025-10-08 14:18:11.646125414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:11.655790   99272 docker.go:318] overlay module found
	I1008 14:18:11.657595   99272 out.go:99] Using the docker driver based on user configuration
	I1008 14:18:11.657633   99272 start.go:305] selected driver: docker
	I1008 14:18:11.657639   99272 start.go:925] validating driver "docker" against <nil>
	I1008 14:18:11.657740   99272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 14:18:11.712204   99272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:53 SystemTime:2025-10-08 14:18:11.703082083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 14:18:11.712361   99272 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:18:11.712915   99272 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1008 14:18:11.713053   99272 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:18:11.714987   99272 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-840888 host does not exist
	  To start a cluster, run: "minikube start -p download-only-840888"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-840888
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-250844 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-250844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-250844
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
x
+
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I1008 14:18:17.171811   98900 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-198013 --alsologtostderr --binary-mirror http://127.0.0.1:41765 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-198013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-198013
--- PASS: TestBinaryMirror (0.81s)
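
The binary mirror test above resolves kubectl through a "?checksum=file:<url>.sha256" style reference, meaning the expected digest is itself fetched from a sidecar URL rather than embedded in the link. A minimal sketch of that pattern under assumed helper names (this is not the code path minikube or the test actually uses):
-- Go sketch (illustrative, not test output) --
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetchExpectedSHA256 downloads the .sha256 sidecar file and returns the hex digest.
	func fetchExpectedSHA256(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		b, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		// Sidecar files typically contain "<hex digest>" or "<hex digest>  <name>".
		fields := strings.Fields(string(b))
		if len(fields) == 0 {
			return "", fmt.Errorf("empty checksum file at %s", url)
		}
		return fields[0], nil
	}

	// sha256Of streams a download and returns its hex digest.
	func sha256Of(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		h := sha256.New()
		if _, err := io.Copy(h, resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
		want, err := fetchExpectedSHA256(base + ".sha256")
		if err != nil {
			fmt.Println(err)
			return
		}
		got, err := sha256Of(base)
		fmt.Println(got == want, err)
	}
-- /Go sketch --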

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-541206
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-541206: exit status 85 (60.202599ms)

                                                
                                                
-- stdout --
	* Profile "addons-541206" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-541206"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-541206
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-541206: exit status 85 (63.249256ms)

                                                
                                                
-- stdout --
	* Profile "addons-541206" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-541206"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
x
+
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status: exit status 6 (286.551535ms)

                                                
                                                
-- stdout --
	nospam-526605
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 14:35:12.483956  111036 status.go:458] kubeconfig endpoint: get endpoint: "nospam-526605" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status: exit status 6 (285.776797ms)

                                                
                                                
-- stdout --
	nospam-526605
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 14:35:12.770636  111154 status.go:458] kubeconfig endpoint: get endpoint: "nospam-526605" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status: exit status 6 (291.99573ms)

                                                
                                                
-- stdout --
	nospam-526605
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 14:35:13.061416  111269 status.go:458] kubeconfig endpoint: get endpoint: "nospam-526605" does not appear in /home/jenkins/minikube-integration/21681-94984/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.87s)
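
Each status call above exits with status 6 because the profile name is missing from the kubeconfig ("does not appear in .../kubeconfig"), which surfaces as "kubeconfig: Misconfigured". The reported check amounts to looking the profile up in the kubeconfig's cluster map; a minimal sketch, assuming the k8s.io/client-go/tools/clientcmd loader and not minikube's actual status.go:
-- Go sketch (illustrative, not test output) --
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := os.Getenv("KUBECONFIG") // e.g. .../21681-94984/kubeconfig in this run
		profile := "nospam-526605"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			// Mirrors the "kubeconfig: Misconfigured" outcome in the status output above.
			fmt.Fprintf(os.Stderr, "%q does not appear in %s\n", profile, kubeconfig)
			os.Exit(6) // exit status 6, as seen in the log
		}
		fmt.Println("endpoint:", cluster.Server)
	}
-- /Go sketch --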

                                                
                                    
x
+
TestErrorSpam/pause (1.33s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 pause
--- PASS: TestErrorSpam/pause (1.33s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.33s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

                                                
                                    
x
+
TestErrorSpam/stop (1.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 stop: (1.209304589s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-526605 --log_dir /tmp/nospam-526605 stop
--- PASS: TestErrorSpam/stop (1.39s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21681-94984/.minikube/files/etc/test/nested/copy/98900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-367186 /tmp/TestFunctionalserialCacheCmdcacheadd_local2182241014/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache add minikube-local-cache-test:functional-367186
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-367186 cache add minikube-local-cache-test:functional-367186: (1.265349666s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache delete minikube-local-cache-test:functional-367186
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-367186
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.60s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.143856ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 logs --file /tmp/TestFunctionalserialLogsFileCmd928959794/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 config get cpus: exit status 14 (74.909921ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 config get cpus: exit status 14 (60.301008ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
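
The config commands above treat a missing key as a distinct failure: "config get cpus" exits with status 14 and prints "specified key could not be found in config", while set and unset succeed silently. A toy version of that lookup-or-exit behaviour (the JSON config path and the way the key is read are assumptions for illustration, not minikube's implementation; only the exit code and message are taken from the log):
-- Go sketch (illustrative, not test output) --
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	const exitKeyNotFound = 14 // matches the "exit status 14" seen above

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: confget <key>")
			os.Exit(1)
		}
		raw, err := os.ReadFile("config.json") // assumed location for this sketch
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var cfg map[string]any
		if err := json.Unmarshal(raw, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		val, ok := cfg[os.Args[1]]
		if !ok {
			fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
			os.Exit(exitKeyNotFound)
		}
		fmt.Println(val)
	}
-- /Go sketch --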

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.35893ms)

                                                
                                                
-- stdout --
	* [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:02:30.800174  146704 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:30.800470  146704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:30.800482  146704 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:30.800486  146704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:30.800681  146704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:30.801180  146704 out.go:368] Setting JSON to false
	I1008 15:02:30.802320  146704 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9902,"bootTime":1759925849,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:02:30.802437  146704 start.go:141] virtualization: kvm guest
	I1008 15:02:30.804816  146704 out.go:179] * [functional-367186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:02:30.806419  146704 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:02:30.806421  146704 notify.go:220] Checking for updates...
	I1008 15:02:30.809042  146704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:02:30.810328  146704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:02:30.811581  146704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:02:30.814364  146704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:02:30.815633  146704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:02:30.817058  146704 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:30.817598  146704 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:02:30.846434  146704 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:02:30.846665  146704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:02:30.919075  146704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:30.905559226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:02:30.919224  146704 docker.go:318] overlay module found
	I1008 15:02:30.920878  146704 out.go:179] * Using the docker driver based on existing profile
	I1008 15:02:30.922073  146704 start.go:305] selected driver: docker
	I1008 15:02:30.922095  146704 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:02:30.922208  146704 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:02:30.924264  146704 out.go:203] 
	W1008 15:02:30.925665  146704 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 15:02:30.927046  146704 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
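
The dry run above fails fast with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum; the rejection happens during flag validation, before any container work starts. A minimal sketch of that pre-flight check (threshold, message, and exit status copied from the log; the function itself is illustrative, not minikube's start_flags.go):
-- Go sketch (illustrative, not test output) --
	package main

	import (
		"fmt"
		"os"
	)

	const minUsableMemoryMB = 1800 // usable minimum reported by the log above

	// validateRequestedMemory rejects allocations below the usable minimum.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf(
				"Requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23) // exit status 23, as seen in the dry-run output
		}
	}
-- /Go sketch --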

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-367186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.516633ms)

                                                
                                                
-- stdout --
	* [functional-367186] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:02:31.228491  146984 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:02:31.228757  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.228769  146984 out.go:374] Setting ErrFile to fd 2...
	I1008 15:02:31.228775  146984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:02:31.229092  146984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
	I1008 15:02:31.229608  146984 out.go:368] Setting JSON to false
	I1008 15:02:31.230544  146984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9902,"bootTime":1759925849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:02:31.230642  146984 start.go:141] virtualization: kvm guest
	I1008 15:02:31.232608  146984 out.go:179] * [functional-367186] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1008 15:02:31.234774  146984 notify.go:220] Checking for updates...
	I1008 15:02:31.234788  146984 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:02:31.236372  146984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:02:31.237980  146984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig
	I1008 15:02:31.239532  146984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube
	I1008 15:02:31.240888  146984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:02:31.242413  146984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:02:31.244247  146984 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:02:31.244801  146984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:02:31.271217  146984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1008 15:02:31.271332  146984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 15:02:31.337074  146984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-08 15:02:31.325606098 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1008 15:02:31.337200  146984 docker.go:318] overlay module found
	I1008 15:02:31.339135  146984 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1008 15:02:31.340433  146984 start.go:305] selected driver: docker
	I1008 15:02:31.340459  146984 start.go:925] validating driver "docker" against &{Name:functional-367186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-367186 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:02:31.340589  146984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:02:31.342564  146984 out.go:203] 
	W1008 15:02:31.343899  146984 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 15:02:31.345192  146984 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh -n functional-367186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cp functional-367186:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2365031035/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh -n functional-367186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh -n functional-367186 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/98900/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /etc/test/nested/copy/98900/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/98900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /etc/ssl/certs/98900.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/98900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /usr/share/ca-certificates/98900.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/989002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /etc/ssl/certs/989002.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/989002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /usr/share/ca-certificates/989002.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "sudo systemctl is-active docker": exit status 1 (328.165018ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "sudo systemctl is-active containerd": exit status 1 (331.786211ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                    
TestFunctional/parallel/License (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "374.054064ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.613505ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "376.816532ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.31303ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-367186 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-367186 image ls --format short --alsologtostderr:
I1008 15:02:33.542172  148571 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:33.542421  148571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.542431  148571 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:33.542435  148571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.542673  148571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:33.543264  148571 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.543364  148571 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.543827  148571 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:33.564928  148571 ssh_runner.go:195] Run: systemctl --version
I1008 15:02:33.564978  148571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:33.585989  148571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:33.692014  148571 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-367186 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-367186 image ls --format table --alsologtostderr:
I1008 15:02:34.219222  148919 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:34.219316  148919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:34.219324  148919 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:34.219328  148919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:34.219585  148919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:34.220197  148919 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:34.220315  148919 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:34.220756  148919 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:34.238753  148919 ssh_runner.go:195] Run: systemctl --version
I1008 15:02:34.238802  148919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:34.257995  148919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:34.361745  148919 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-367186 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"]
,"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"s
ize":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9
fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-367186 image ls --format json --alsologtostderr:
I1008 15:02:33.994699  148835 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:33.994964  148835 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.994973  148835 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:33.994977  148835 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.995183  148835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:33.995762  148835 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.995856  148835 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.996226  148835 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:34.014452  148835 ssh_runner.go:195] Run: systemctl --version
I1008 15:02:34.014516  148835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:34.032992  148835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:34.137947  148835 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
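
Note: the stdout above is a single JSON array in which each image record carries id, repoDigests, repoTags and size (size is a byte count encoded as a string). The following is a minimal Go sketch for consuming output in that shape; the struct name, the field tags and the idea of piping the CLI output through stdin are illustrative assumptions, not something the test itself does.

package main

// Sketch only: decode the array printed by "minikube image ls --format json"
// (see the stdout captured above) and print tag plus size for each image.

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// e.g.  minikube -p functional-367186 image ls --format json | go run .
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []listedImage
	if err := json.Unmarshal(raw, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}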

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-367186 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-367186 image ls --format yaml --alsologtostderr:
I1008 15:02:33.773787  148732 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:33.773905  148732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.773915  148732 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:33.773920  148732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.774129  148732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:33.774737  148732 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.774835  148732 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.775245  148732 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:33.792858  148732 ssh_runner.go:195] Run: systemctl --version
I1008 15:02:33.792911  148732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:33.811246  148732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:33.913854  148732 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh pgrep buildkitd: exit status 1 (273.334234ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr: (2.947075536s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5e726090f0c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-367186
--> f64920b8512
Successfully tagged localhost/my-image:functional-367186
f64920b851225886356fb11e055c4e17fc89b77a8aaebcccb1a8a3363da75225
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-367186 image build -t localhost/my-image:functional-367186 testdata/build --alsologtostderr:
I1008 15:02:33.856616  148773 out.go:360] Setting OutFile to fd 1 ...
I1008 15:02:33.856877  148773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.856885  148773 out.go:374] Setting ErrFile to fd 2...
I1008 15:02:33.856889  148773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 15:02:33.857121  148773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-94984/.minikube/bin
I1008 15:02:33.857732  148773 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.858592  148773 config.go:182] Loaded profile config "functional-367186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 15:02:33.859277  148773 cli_runner.go:164] Run: docker container inspect functional-367186 --format={{.State.Status}}
I1008 15:02:33.877304  148773 ssh_runner.go:195] Run: systemctl --version
I1008 15:02:33.877499  148773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-367186
I1008 15:02:33.895558  148773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-94984/.minikube/machines/functional-367186/id_rsa Username:docker}
I1008 15:02:33.998858  148773 build_images.go:161] Building image from path: /tmp/build.1227119590.tar
I1008 15:02:33.998911  148773 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 15:02:34.009040  148773 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1227119590.tar
I1008 15:02:34.013514  148773 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1227119590.tar: stat -c "%s %y" /var/lib/minikube/build/build.1227119590.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1227119590.tar': No such file or directory
I1008 15:02:34.013543  148773 ssh_runner.go:362] scp /tmp/build.1227119590.tar --> /var/lib/minikube/build/build.1227119590.tar (3072 bytes)
I1008 15:02:34.032409  148773 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1227119590
I1008 15:02:34.041267  148773 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1227119590 -xf /var/lib/minikube/build/build.1227119590.tar
I1008 15:02:34.050005  148773 crio.go:315] Building image: /var/lib/minikube/build/build.1227119590
I1008 15:02:34.050075  148773 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-367186 /var/lib/minikube/build/build.1227119590 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1008 15:02:36.733513  148773 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-367186 /var/lib/minikube/build/build.1227119590 --cgroup-manager=cgroupfs: (2.683411456s)
I1008 15:02:36.733585  148773 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1227119590
I1008 15:02:36.742177  148773 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1227119590.tar
I1008 15:02:36.749915  148773 build_images.go:217] Built localhost/my-image:functional-367186 from /tmp/build.1227119590.tar
I1008 15:02:36.749960  148773 build_images.go:133] succeeded building to: functional-367186
I1008 15:02:36.749968  148773 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.44s)
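
Note: the build above is driven entirely through the minikube CLI (crio/podman does the actual build inside the node). A minimal Go sketch that shells out to the same command is shown here; the profile name, image tag and context directory are copied from the log and would need to be replaced for any other environment.

package main

// Illustrative sketch, not the test harness itself: run the same kind of
// in-cluster image build via "minikube image build" and stream its output.

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-367186",
		"image", "build",
		"-t", "localhost/my-image:functional-367186",
		"testdata/build", "--alsologtostderr")
	cmd.Stdout = os.Stdout // build steps (FROM / RUN / ADD) appear as they run
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "image build failed:", err)
		os.Exit(1)
	}
}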

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.552578273s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-367186
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdspecific-port716944872/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.636155ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 15:02:25.536665   98900 retry.go:31] will retry after 363.885449ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdspecific-port716944872/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "sudo umount -f /mount-9p": exit status 1 (312.189586ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-367186 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdspecific-port716944872/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
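
Note: the "will retry after 363.885449ms" line above shows the harness probing the 9p mount with a backoff between attempts. A minimal Go sketch of that retry-with-backoff pattern follows; the function name, timings and command line are illustrative assumptions, not the harness's own retry.go implementation.

package main

// Sketch: poll "findmnt" inside the guest until the 9p mount appears or a
// deadline passes, doubling the wait between probes.

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, dir string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for {
		err := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", dir)).Run()
		if err == nil {
			return nil // the 9p mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready: %w", dir, err)
		}
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff between probes
	}
}

func main() {
	if err := waitForMount("functional-367186", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}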

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T" /mount1: exit status 1 (422.152376ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 15:02:27.531983   98900 retry.go:31] will retry after 448.854343ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-367186 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-367186 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816909114/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image rm kicbase/echo-server:functional-367186 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-367186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-367186 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-367186
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-367186
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-367186
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-497079 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-497079 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (1.22s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-497079 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-497079 --output=json --user=testUser: (1.222054318s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-849101 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-849101 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.037173ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"04e39eee-9e55-431a-b88f-0430ae037eb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-849101] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"736a58a7-b4c0-4058-b493-662489e67528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"3712d7de-0a95-4520-a6cf-3ac0b3d6c179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6baae027-9dd1-47d6-8f79-9e13cab34028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-94984/kubeconfig"}}
	{"specversion":"1.0","id":"98ee1b29-c930-43eb-92ff-e878eb950016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-94984/.minikube"}}
	{"specversion":"1.0","id":"3b7746a7-dfb5-493f-8889-4b64d1ff56f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d5b91a32-4dbd-4c0d-a3b0-101fb7e04818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6936138d-74b5-446d-9396-74a4be15f4c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-849101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-849101
--- PASS: TestErrorJSONOutput (0.21s)
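Note added for context: each line in the -- stdout -- block above is a CloudEvents-style JSON event emitted by `minikube start --output=json`. The following is a minimal, hypothetical Go sketch of decoding one such line; the struct and variable names are illustrative (not minikube's own types), and only the JSON keys and values are taken from the output above.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeEvent mirrors the keys visible in the JSON lines above
	// (specversion, id, source, type, datacontenttype, data).
	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Example: the io.k8s.sigs.minikube.error event copied from the -- stdout -- block above.
		line := `{"specversion":"1.0","id":"6936138d-74b5-446d-9396-74a4be15f4c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"]) // io.k8s.sigs.minikube.error 56 DRV_UNSUPPORTED_OS
	}

Step events (type io.k8s.sigs.minikube.step) carry currentstep and totalsteps in the same data map, which is presumably what the DistinctCurrentSteps/IncreasingCurrentSteps subtests above assert on.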

TestKicCustomNetwork/create_custom_network (29.14s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-647933 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-647933 --network=: (27.051203264s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-647933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-647933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-647933: (2.068213069s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.14s)

TestKicCustomNetwork/use_default_bridge_network (27.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-206445 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-206445 --network=bridge: (25.322723753s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-206445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-206445
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-206445: (1.936731877s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.28s)

TestKicExistingNetwork (25.05s)

=== RUN   TestKicExistingNetwork
I1008 15:39:29.739856   98900 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 15:39:29.757818   98900 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 15:39:29.757910   98900 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1008 15:39:29.757927   98900 cli_runner.go:164] Run: docker network inspect existing-network
W1008 15:39:29.774387   98900 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1008 15:39:29.774421   98900 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1008 15:39:29.774456   98900 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1008 15:39:29.774640   98900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 15:39:29.792403   98900 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004a40f0}
I1008 15:39:29.792466   98900 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1008 15:39:29.792522   98900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1008 15:39:29.849977   98900 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-004069 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-004069 --network=existing-network: (22.942793126s)
helpers_test.go:175: Cleaning up "existing-network-004069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-004069
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-004069: (1.967882135s)
I1008 15:39:54.777623   98900 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.05s)
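Note added for context: the trace above shows the fallback path taken when `docker network inspect existing-network` exits non-zero: a free private subnet (192.168.49.0/24) is selected and the network is created explicitly. Below is a minimal sketch of that inspect-then-create flow, assuming only that the docker CLI is on PATH; the command arguments are copied from the log, while the helper name is hypothetical and this is not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureNetwork creates a bridge network only if `docker network inspect`
	// reports it missing, mirroring the sequence of commands in the log above.
	func ensureNetwork(name, subnet, gateway string, mtu int) error {
		if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
			return nil // network already exists
		}
		args := []string{
			"network", "create", "--driver=bridge",
			"--subnet=" + subnet, "--gateway=" + gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=" + name,
			name,
		}
		return exec.Command("docker", args...).Run()
	}

	func main() {
		if err := ensureNetwork("existing-network", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
			panic(err)
		}
	}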

TestKicCustomSubnet (28.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-512804 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-512804 --subnet=192.168.60.0/24: (26.473008173s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-512804 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-512804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-512804
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-512804: (2.146118757s)
--- PASS: TestKicCustomSubnet (28.64s)

TestKicStaticIP (25.67s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-210984 --static-ip=192.168.200.200
E1008 15:40:29.991502   98900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-94984/.minikube/profiles/functional-367186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-210984 --static-ip=192.168.200.200: (23.456371306s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-210984 ip
helpers_test.go:175: Cleaning up "static-ip-210984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-210984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-210984: (2.079339117s)
--- PASS: TestKicStaticIP (25.67s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (6.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-261998 --memory=3072 --mount-string /tmp/TestMountStartserial2165937132/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-261998 --memory=3072 --mount-string /tmp/TestMountStartserial2165937132/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.143013588s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.14s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-261998 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-280778 --memory=3072 --mount-string /tmp/TestMountStartserial2165937132/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-280778 --memory=3072 --mount-string /tmp/TestMountStartserial2165937132/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.413363154s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.41s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-280778 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-261998 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-261998 --alsologtostderr -v=5: (1.643322889s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-280778 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-280778
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-280778: (1.201426675s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-280778
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-280778: (6.954740898s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-280778 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)